241

Perspective vol. 11 no. 7 (Dec 1977) / Perspective: Newsletter of the Association for the Advancement of Christian Scholarship

Van Dyk, John, Hielema, Evelyn Kuntz, Hart, Hendrik, Campbell, Dave 26 March 2013 (has links)
No description available.
242

Responding to Alienating Trends in Modern Education and Civilization by Remembering our Responsibility to Metaphysics and Ontological Education: Answering to the Platonic Essence of Education

Karumanchiri, Arun 01 January 2011 (has links)
This thesis explores the most basic purpose of education and how it can be advanced. To begin to analyze this fundamental area of concern, the thesis associates notions of education with notions and experiences of truth and authenticity, which vary historically and culturally. A phenomenological analysis, drawing on the philosophy of Heidegger, uncovers the basic conditions of human experience and discourse, which in the West have become bent toward technology and jargon. Heidegger draws on Plato's account of the 'essence of education' in the Cave Allegory, which underscores human agency in light of truth as unhiddenness. Heidegger calls for ontological education, which advances authenticity as it preserves individuals as co-disclosing, historical beings.
244

Výpočet tepelného pole rozvaděče UniGear 500R / Calculation of the thermal field of the UniGear 500R switchgear

Mokrý, Lukáš January 2015 (has links)
The aim of this work is to describe the high-voltage switchgear type UniGear 500R, which is part of the UniGear switchgear family, with a focus on the heating of a single 500R unit and its parts during operation. The maximum temperature rises are limited by standards and must not be exceeded in order to ensure safe and reliable operation, which is why heating tests are a necessary part of switchgear design and development. The calculation is carried out in two different ways: first by the classic one-pole heating network method, and second by numerical simulation in the SolidWorks Flow Simulation program. Besides the theoretical description, the thesis presents the 3D model used and explains both methods of calculation and simulation. The last part of the work is a heating measurement on this type of switchgear to obtain real data. The main goal is to compare the measured values with the calculated ones and to decide whether the tests can be simulated with adequate accuracy; if so, it would be possible to replace the real laboratory test, which costs many thousands of crowns and takes many hours. The work was carried out in collaboration with the EJF division of ABB, where I am employed. Heating issues are an ongoing concern in the company as its products are developed and improved, so this work could be helpful in that field. ABB provided all the necessary materials, especially technical catalogues, the 3D model, and the final values from laboratory measurements; support from the faculty consisted mainly of study consultations and advice on the calculation approach. The thesis concludes with a final comparison and evaluation of the achieved results.
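The one-pole heating network method mentioned above treats the switchgear as an electrical-circuit analogue: nodes are component temperatures, thermal resistances connect them, and power losses act as current sources. A minimal sketch of that idea follows; the node layout, loss figures, and resistance values are illustrative assumptions, not data from the thesis.

```python
# Minimal thermal-network sketch: solve G * T = P, where G is the thermal
# conductance matrix (1/R_th), T the node temperature rises over ambient,
# and P the power dissipated at each node.
import numpy as np

def solve_thermal_network(resistances, injections, n_nodes):
    """resistances: list of (node_a, node_b, R_th in K/W); node -1 is ambient.
    injections: dict mapping node -> dissipated power in W.
    Returns the temperature rise of each node above ambient (K)."""
    G = np.zeros((n_nodes, n_nodes))
    for a, b, r in resistances:
        g = 1.0 / r
        if a >= 0:
            G[a, a] += g
        if b >= 0:
            G[b, b] += g
        if a >= 0 and b >= 0:
            G[a, b] -= g
            G[b, a] -= g
    P = np.zeros(n_nodes)
    for node, p in injections.items():
        P[node] = p
    return np.linalg.solve(G, P)

# Illustrative two-node example: a busbar (node 0) dissipating 60 W,
# coupled to the enclosure (node 1) through 0.5 K/W, and the enclosure
# to ambient through 0.8 K/W.
rises = solve_thermal_network(
    resistances=[(0, 1, 0.5), (1, -1, 0.8)],
    injections={0: 60.0},
    n_nodes=2,
)
print(rises)  # -> [78. 48.]  (busbar and enclosure rise above ambient, K)
```

The full thesis model would of course use many more nodes and temperature-dependent resistances; this sketch only shows the linear-network core that such a calculation rests on.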
245

Language Policy and Bilingual Education for Immigrant Students at Public Schools in Japan

Asakura, Naomi 21 September 2015 (has links)
This thesis discusses current Japanese language (nihongo) education for immigrant students at public schools in Japan and offers recommendations based on a study of language policy and a comparison with bilingual education in the United States. Japan's decreasing birth rate and aging population have led to the acceptance of more foreign workers, and language education in Japan has been developing in response. Chapter 1 focuses on theories of language policy, particularly the ideas of Wright (2004), Neustupný (2006), Spolsky (2004), and Cooper (1989), and discusses the similarities and differences between them. Applying these theories to language policy in Japan, the chapter shows how that policy has changed throughout Japanese history. Chapter 2 discusses the current environment surrounding immigrant students, describing not only the growing population of foreign students but also the history of Japanese language education and the laws related to it. The chapter also presents current movements in Japanese language policy and how they affect Japanese language education for language-minority students. Chapter 3 compares bilingual education in the United States with bilingual education in Japan and makes three suggestions for improving Japanese language education at public schools in Japan, particularly addressing the classification of language levels for immigrant students, teaching styles, and the shortage of qualified bilingual teachers.
246

“I Would Prevent You from Further Violence”: Women, Pirates, and the Problem of Violence in the Antebellum American Imagination

Avila, Beth Eileen January 2016 (has links)
No description available.
247

"The earth is a tomb and man a fleeting vapour": The Roots of Climate Change in Early American Literature

Keeler, Kyle B. 10 April 2018 (has links)
No description available.
248

Constrained optimization for machine learning : algorithms and applications

Gallego-Posada, Jose 06 1900 (has links)
Le déploiement généralisé de modèles d’apprentissage automatique de plus en plus performants a entraîné des pressions croissantes pour améliorer la robustesse, la sécurité et l’équité de ces modèles, souvent en raison de considérations réglementaires et éthiques. En outre, la mise en œuvre de solutions d’intelligence artificielle dans des applications réelles est limitée par leur incapacité actuelle à garantir la conformité aux normes industrielles et aux réglementations gouvernementales. Les pipelines standards pour le développement de modèles d’apprentissage automatique adoptent une mentalité de “construire maintenant, réparer plus tard”, intégrant des mesures de sécurité a posteriori. Cette accumulation continue de dette technique entrave le progrès du domaine à long terme. L’optimisation sous contraintes offre un cadre conceptuel accompagné d’outils algorithmiques permettant d’imposer de manière fiable des propriétés complexes sur des modèles d’apprentissage automatique. Cette thèse appelle à un changement de paradigme dans lequel les contraintes constituent une partie intégrante du processus de développement des modèles, visant à produire des modèles d’apprentissage automatique qui sont intrinsèquement sécurisés par conception. Cette thèse offre une perspective holistique sur l’usage de l’optimisation sous contraintes dans les tâches d’apprentissage profond. Nous examinerons i) la nécessité de formulations contraintes, ii) les avantages offerts par le point de vue de l’optimisation sous contraintes, et iii) les défis algorithmiques qui surgissent dans la résolution de ces problèmes. Nous présentons plusieurs études de cas illustrant l’application des techniques d’optimisation sous contraintes à des problèmes courants d’apprentissage automatique. Dans la Contribution I, nous plaidons en faveur de l’utilisation des formulations sous contraintes en apprentissage automatique.
Nous soutenons qu’il est préférable de gérer des régularisateurs interprétables via des contraintes explicites plutôt que par des pénalités additives, particulièrement lorsqu’il s’agit de modèles non convexes. Nous considérons l’entraînement de modèles creux avec une régularisation L0 et démontrons que i) il est possible de trouver des solutions réalisables et performantes à des problèmes de grande envergure avec des contraintes non convexes ; et que ii) l’approche contrainte peut éviter les coûteux ajustements par essais et erreurs inhérents aux techniques basées sur les pénalités. La Contribution II approfondit la contribution précédente en imposant des contraintes explicites sur le taux de compression atteint par les Représentations Neuronales Implicites, une classe de modèles visant à entreposer efficacement des données (telles qu’une image) dans les paramètres d’un réseau neuronal. Dans ce travail, nous nous concentrons sur l’interaction entre la taille du modèle, sa capacité représentationnelle, et le temps d’entraînement requis. Plutôt que de restreindre la taille du modèle à un budget fixe (qui se conforme au taux de compression requis), nous entraînons un modèle surparamétré et creux avec des contraintes de taux de compression. Cela nous permet d’exploiter la puissance de modèles plus grands pour obtenir de meilleures reconstructions, plus rapidement, sans avoir à nous engager à leur taux de compression indésirable. La Contribution III présente les avantages des formulations sous contraintes dans une application réaliste de la parcimonie des modèles avec des contraintes liées à l’équité non différentiables. Les performances des réseaux neuronaux élagués se dégradent de manière inégale entre les sous-groupes de données, nécessitant ainsi l’utilisation de techniques d’atténuation.
Nous proposons une formulation qui impose des contraintes sur les changements de précision du modèle dans chaque sous-groupe, contrairement aux travaux antérieurs qui considèrent des contraintes basées sur des métriques de substitution (telles que la perte du sous-groupe). Nous abordons les défis de la non-différentiabilité et de la stochasticité posés par nos contraintes proposées, et démontrons que notre méthode s’adapte de manière fiable aux problèmes d’optimisation impliquant de grands modèles et des centaines de sous-groupes. Dans la Contribution IV, nous nous concentrons sur la dynamique de l’optimisation lagrangienne basée sur le gradient, une technique populaire pour résoudre les problèmes sous contraintes non convexes en apprentissage profond. La nature adversariale du jeu min-max lagrangien le rend sujet à des comportements oscillatoires ou instables. En nous basant sur des idées tirées de la littérature sur les régulateurs PID, nous proposons un algorithme pour modifier les multiplicateurs de Lagrange qui offre une dynamique d’entraînement robuste et stable. Cette contribution met en place les bases pour que les praticiens adoptent et mettent en œuvre des approches sous contraintes avec confiance dans diverses applications réelles. Dans la Contribution V, nous fournissons un aperçu de Cooper : une bibliothèque pour l’optimisation sous contraintes basée sur le lagrangien dans PyTorch. Cette bibliothèque open-source implémente toutes les contributions principales présentées dans les chapitres précédents et s’intègre harmonieusement dans le cadre PyTorch. Nous avons développé Cooper dans le but de rendre les techniques d’optimisation sous contraintes facilement accessibles aux chercheurs et praticiens de l’apprentissage automatique. 
/ The widespread deployment of increasingly capable machine learning models has resulted in mounting pressures to enhance the robustness, safety and fairness of such models, often arising from regulatory and ethical considerations. Further, the implementation of artificial intelligence solutions in real-world applications is limited by their current inability to guarantee compliance with industry standards and governmental regulations. Current standard pipelines for developing machine learning models embrace a “build now, fix later” mentality, retrofitting safety measures as afterthoughts. This continuous incurrence of technical debt hinders the progress of the field in the long term. Constrained optimization offers a conceptual framework accompanied by algorithmic tools for reliably enforcing complex properties on machine learning models. This thesis calls for a paradigm shift in which constraints constitute an integral part of the model development process, aiming to produce machine learning models that are inherently secure by design. This thesis provides a holistic perspective on the use of constrained optimization in deep learning tasks. We shall explore i) the need for constrained formulations, ii) the advantages afforded by the constrained optimization standpoint and iii) the algorithmic challenges arising in the solution of such problems. We present several case studies illustrating the application of constrained optimization techniques to popular machine learning problems. In Contribution I, we advocate for the use of constrained formulations in machine learning. We argue that it is preferable to handle interpretable regularizers via explicit constraints, rather than using additive penalties, especially when dealing with non-convex models.
We consider the training of sparse models with L0-regularization and demonstrate that i) it is possible to find feasible, well-performing solutions to large-scale problems with non-convex constraints; and that ii) the constrained approach can avoid the costly trial-and-error tuning inherent to penalty-based techniques. Contribution II expands on the previous contribution by imposing explicit constraints on the compression rate achieved by Implicit Neural Representations, a class of models that aim to efficiently store data (such as an image) within a neural network’s parameters. In this work we concentrate on the interplay between the model size, its representational capacity and the required training time. Rather than restricting the model size to a fixed budget (that complies with the required compression rate), we train an overparametrized, sparse model with compression-rate constraints. This allows us to exploit the power of larger models to achieve better reconstructions, faster, without having to commit to their undesirable compression rate. Contribution III showcases the advantages of constrained formulations in a realistic model-sparsity application with non-differentiable fairness-related constraints. The performance of pruned neural networks degrades unevenly across data sub-groups, thus requiring the use of mitigation techniques. We propose a formulation that imposes constraints on changes in the model accuracy in each sub-group, in contrast to prior work which considers constraints based on surrogate metrics (such as the sub-group loss). We address the non-differentiability and stochasticity challenges posed by our proposed constraints, and demonstrate that our method scales reliably to optimization problems involving large models and hundreds of sub-groups. In Contribution IV, we focus on the dynamics of gradient-based Lagrangian optimization, a popular technique for solving the non-convex constrained problems arising in deep learning.
The adversarial nature of the min-max Lagrangian game makes it prone to oscillatory or unstable behaviors. Based on ideas from the PID control literature, we propose an algorithm for updating the Lagrange multipliers which yields robust, stable training dynamics. This contribution lays the groundwork for practitioners to adopt and implement constrained approaches confidently in diverse real-world applications. In Contribution V, we provide an overview of Cooper: a library for Lagrangian-based constrained optimization in PyTorch. This open-source library implements all the core contributions presented in the preceding chapters and integrates seamlessly with the PyTorch framework. We developed Cooper with the goal of making constrained optimization techniques readily available to machine learning researchers and practitioners.
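The gradient-based Lagrangian dynamics discussed above (Contribution IV) can be illustrated on a toy problem. The sketch below is a generic gradient descent-ascent scheme, not the thesis's algorithm or the Cooper API; the objective, constraint, step sizes, and iteration count are illustrative choices. We minimize f(x) = x² subject to g(x) = 1 − x ≤ 0 by descending on x and ascending on the multiplier λ, projecting λ onto λ ≥ 0.

```python
# Gradient descent-ascent on the Lagrangian L(x, lam) = f(x) + lam * g(x).
# Toy problem (illustrative): minimize x^2 subject to 1 - x <= 0, i.e. x >= 1.
# The constrained optimum is x* = 1 with multiplier lam* = 2 (from 2x - lam = 0).

def solve(lr_x=0.05, lr_lam=0.05, steps=5000):
    x, lam = 0.0, 0.0
    for _ in range(steps):
        grad_x = 2 * x - lam      # dL/dx
        grad_lam = 1 - x          # dL/dlam = g(x), the constraint violation
        x -= lr_x * grad_x        # primal descent step
        lam += lr_lam * grad_lam  # dual ascent step
        lam = max(lam, 0.0)       # project multiplier onto lam >= 0
    return x, lam

x, lam = solve()
print(round(x, 3), round(lam, 3))  # converges near x = 1, lam = 2
```

Because the multiplier update is pure integration of the violation, this scheme can oscillate on harder problems; the PID-based update proposed in Contribution IV would replace the plain ascent step on λ with a damped, controller-style update of the violation signal.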
249

Analýza možností zvýšení účinnosti asynchronních motorů / Analysis of possibilities to improve the efficiency of induction motors

Novotný, Jiří January 2014 (has links)
The first part of this master's thesis on increasing the efficiency of induction motors briefly presents basic information about induction motors, followed by an overview of their losses. The next part deals with ways to increase the efficiency of induction motors without increasing tooling costs. The practical part consists of measurements of four induction motors with various mechanical adjustments, making it possible to compare the benefits of these modifications. The measured results are compared against finite-element simulations in the Maxwell 2D Design program, in which the same motors were modelled as measured. The theoretical knowledge about increasing efficiency is applied in practice through these simulations.
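The loss overview mentioned above determines efficiency directly: output power is input power minus the segregated losses (stator copper, rotor copper, iron, friction and windage, stray load). A small sketch of that bookkeeping follows; the figures are illustrative assumptions, not measurements from the thesis.

```python
# Induction motor efficiency from segregated losses (all values in W).
# Figures below are illustrative, roughly a small 4 kW machine.
def efficiency(p_input, losses):
    """Efficiency = (input power - total losses) / input power."""
    p_out = p_input - sum(losses.values())
    return p_out / p_input

losses = {
    "stator_copper": 220.0,    # I^2*R in the stator winding
    "rotor_copper": 140.0,     # slip-dependent losses in the rotor cage
    "iron": 110.0,             # hysteresis and eddy-current losses
    "friction_windage": 45.0,  # bearings and cooling fan
    "stray_load": 60.0,        # additional load losses
}
eta = efficiency(p_input=4575.0, losses=losses)
print(f"{eta:.3f}")  # -> 0.874
```

Mechanical adjustments of the kind the thesis compares show up as changes in individual entries of this loss breakdown, which is why loss segregation is the natural basis for comparing them.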
