11

An Algorithm For The Forward Step Of Adaptive Regression Splines Via Mapping Approach

Kartal Koc, Elcin 01 September 2012 (has links) (PDF)
In high-dimensional data modeling, Multivariate Adaptive Regression Splines (MARS) is a well-known nonparametric regression technique that approximates the nonlinear relationship between a response variable and the predictors with the help of splines. MARS estimates the function using piecewise linear basis functions separated by breaking points (knots). The model is built in two stepwise procedures: forward selection and backward elimination. In the first step, an overly large model containing many basis functions and knot points is generated; in the second, the basis functions that contribute least to the overall fit are eliminated. In the conventional adaptive spline procedure, knots are selected from the set of distinct data points, which makes the forward selection procedure computationally expensive and leads to high local variance. These drawbacks can be avoided by selecting the knot points from a subset of the data, which amounts to data reduction. In this study, a new method (called S-FMARS) is proposed that selects the knot points using a self-organizing map-based approach which maps the original data points to a lower-dimensional space. As a result, fewer candidate knot points need to be evaluated during model building in the forward selection step of the MARS algorithm. Results obtained on simulated datasets and six real-world datasets show that the proposed method is time-efficient in model construction without degrading model accuracy or prediction performance. The proposed approach is applied to the MARS and CMARS methods as an alternative to their forward step, improving them by decreasing their computing time.
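As an illustration of the knot-reduction idea (a minimal sketch, not the S-FMARS algorithm itself; the 1-D setting, grid size, decay schedules, and function names are assumptions), a basic self-organizing map can compress the distinct values of a predictor into a much smaller set of prototype knot locations:

```python
# Minimal 1-D self-organizing map (SOM) used to produce a reduced set of knot
# candidates. Illustrative only; S-FMARS as described in the abstract is not
# reproduced here.
import numpy as np

def som_knot_candidates(x, n_units=25, n_iter=2000, lr0=0.5, sigma0=5.0, seed=0):
    """Map 1-D predictor values x onto a small set of prototype knot locations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    w = np.linspace(x.min(), x.max(), n_units)   # prototypes spread over the data range
    idx = np.arange(n_units)
    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)                   # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3      # shrinking neighborhood width
        xi = x[rng.integers(len(x))]              # random training sample
        bmu = np.argmin(np.abs(w - xi))           # best-matching unit
        h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))  # neighborhood weights
        w += lr * h * (xi - w)                    # pull prototypes toward the sample
    return np.sort(w)

data = np.random.default_rng(1).normal(size=1000)
knots = som_knot_candidates(data)
print(len(knots), knots[:5])   # ~25 candidates instead of one per distinct data point
```

In a MARS-style forward pass, only the returned prototypes would be tried as candidate knots, rather than every distinct data value.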
12

A Computational Simulation Model for Predicting Infectious Disease Spread using the Evolving Contact Network Algorithm

Munkhbat, Buyannemekh 02 July 2019 (has links)
Commonly used simulation models for predicting outbreaks of re-emerging infectious diseases (EIDs) take either an individual-level or a population-level approach to modeling contact dynamics. These approaches trade off the ability to incorporate individual-level dynamics against computational efficiency. Agent-based network models (ABNMs) take an individual-level approach by simulating the entire population and its contact structure, which makes it possible to add detailed individual-level characteristics. However, because this method is computationally expensive, ABNMs use scaled-down versions of the full population, which are unsuitable for low-prevalence diseases because the number of infected cases becomes negligible when scaled down. Compartmental models use differential equations to simulate population-level features, which is computationally inexpensive and allows full-scale populations to be modeled. However, because the compartmental framework assumes random mixing between people, it is not suitable for diseases in which the underlying contact structures are a significant feature of disease epidemiology. Current methods are therefore unsuitable for simulating diseases that have low prevalence and in which contact structures are significant. The conceptual framework for a new simulation method, the Evolving Contact Network Algorithm (ECNA), was recently proposed to address this gap. The ECNA combines attributes of ABNMs and compartmental modeling. It generates a contact network of only infected persons and their immediate contacts, and evolves the network as new persons become infected. The conceptual framework of the ECNA is promising for application to diseases with low prevalence and significant contact structures. This thesis develops and tests different algorithms to advance the computational capabilities of the ECNA and its flexibility to model different network settings; these features are key components that determine the feasibility of the ECNA for disease prediction. Results indicate that the ECNA is nearly 20 times faster than an ABNM when simulating a population of 150,000 and is flexible enough to model networks with two contact layers and communities. Considering the uncertainties in the epidemiological features and origins of future EIDs, there is a significant need for a computationally efficient method suitable for analyzing a range of potential EIDs at a global scale. This work holds promise towards the development of such a model.
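The following is a minimal sketch of the evolving-contact-network idea described above, not the ECNA itself; the contact-degree distribution, transmission probability, and absence of recovery are simplifying assumptions made for illustration:

```python
# Only infected persons and their immediate contacts are instantiated, and the
# network grows as infections occur.
import random
import networkx as nx

def simulate(n_steps=50, seed_infected=1, mean_contacts=4, p_transmit=0.05, seed=0):
    rng = random.Random(seed)
    g = nx.Graph()
    infected = set()
    next_id = 0

    def add_infected_person():
        nonlocal next_id
        person = next_id
        next_id += 1
        infected.add(person)
        # Instantiate only this person's immediate (susceptible) contacts.
        for _ in range(max(1, int(rng.expovariate(1.0 / mean_contacts)))):
            g.add_edge(person, next_id)
            next_id += 1
        return person

    for _ in range(seed_infected):
        add_infected_person()

    for _ in range(n_steps):
        newly_infected = []
        for person in list(infected):
            for contact in g.neighbors(person):
                if contact not in infected and rng.random() < p_transmit:
                    newly_infected.append(contact)
        for contact in set(newly_infected):
            infected.add(contact)
            # The network evolves: the new case brings in its own contacts.
            for _ in range(max(1, int(rng.expovariate(1.0 / mean_contacts)))):
                g.add_edge(contact, next_id)
                next_id += 1
    return g, infected

g, infected = simulate()
print(g.number_of_nodes(), "nodes instantiated;", len(infected), "infected")
```

The point of the construction is that the instantiated network stays proportional to the outbreak size rather than to the full population, which is where the reported speedup over a full ABNM comes from.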
13

A Self-Consistent-Field Perturbation Theory of Nuclear Spin Coupling Constants

Blizzard, Alan Cyril 05 1900 (has links)
The principal methods of calculating nuclear spin coupling constants by applying perturbation theory to molecular orbital wavefunctions for the electronic structure of molecules are discussed. A new method employing a self-consistent-field perturbation theory (SCFPT) is then presented and compared with the earlier methods. In self-consistent-field (SCF) methods, the interaction of an electron with the other electrons in a molecule is accounted for by treating the other electrons as an average distribution of negative charge. However, this charge distribution cannot be calculated until the electron-electron interactions themselves are known; in the SCF method, an initial charge distribution is assumed and then modified in an iterative calculation until the desired degree of self-consistency is attained. In most previous perturbation methods, these electron interactions are not taken into account in a self-consistent manner when calculating the perturbed wavefunction, even when SCF wavefunctions are used to describe the unperturbed molecule. The main advantage of the new SCFPT approach is that it treats the interactions between electrons with the same degree of self-consistency in the perturbed wavefunction as in the unperturbed wavefunction. The SCFPT method offers additional advantages due to its computational efficiency and the direct manner in which it treats the perturbations. This permits the theory to be developed for the orbital and dipolar contributions to nuclear spin coupling as well as for the more commonly treated contact interaction. In this study, the SCFPT theory is used with the Intermediate Neglect of Differential Overlap (INDO) molecular orbital approximation to calculate a number of coupling constants involving 13C and 19F. The usually neglected orbital and dipolar terms are found to be very important in FF and CF coupling and can play a decisive role in explaining the experimental trend of JCF among a series of compounds. The orbital interaction is also found to play a significant role in certain CC couplings. Generally good agreement is obtained between theory and experiment, except for JCF and JFF in oxalyl fluoride and the incorrect signs obtained for cis JFF in fluorinated ethylenes; the nature of the theory permits the latter discrepancy to be rationalized in terms of computational details. The value of JFF in difluoroacetic acid is predicted to be -235 Hz. The SCFPT method is used with a theory of dπ-pπ bonding to predict, in agreement with experiment, that JCH in acetylene will decrease when that molecule is bound in a transition metal complex. / Thesis / Doctor of Philosophy (PhD)
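For background (standard perturbation-theory notation, not taken from the thesis), the isotropic indirect coupling between nuclei A and B is commonly written as a sum of the contributions discussed above:

$$ J_{AB} \;=\; J_{AB}^{\mathrm{FC}} \;+\; J_{AB}^{\mathrm{SD}} \;+\; J_{AB}^{\mathrm{OB}}, $$

where FC is the Fermi contact term, SD the spin-dipolar term, and OB the orbital term. The abstract reports that the usually neglected SD and OB contributions matter strongly for FF and CF couplings, which is what the SCFPT treatment makes accessible.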
14

COUPLED FINITE ELEMENT AND EXTENDED-QD CIRCUIT MODEL FOR INDUCTION MACHINE ANALYSIS

Ayesha Sayed (9166721) 10 September 2022 (has links)
The design of high-performance squirrel-cage induction motors (IMs) entails the capability to predict the motor efficiency map with high accuracy over an operating range. In particular, modeling high-frequency rotor bar currents becomes important for loss analysis in high-speed applications. In theory, it is possible to analyze an IM using time-stepping finite element analysis (TS-FEA); however, this is not viable due to computational limitations. To bridge this gap, we set forth a computationally efficient method to predict the rotor cage loss. This is achieved by coupling magnetostatic FEA with an extended qd-circuit model of the cage. The circuit model is derived in a synchronously rotating reference frame. The proposed IM model includes the effects of saturation, winding and slot harmonics, as well as nonuniform current distribution in the rotor bars. The proposed model is validated by comparing the estimated cage loss and computational effort against a 2-D nonlinear TS-FEA.

The proposed electromagnetic model is finally coupled to a linear thermal model to predict IM performance over an operating range and the results are validated using experiments. The proposed model is further extended to identify detailed flux density waveforms in the iron to estimate core loss. The flux density waveforms are obtained by conducting a set of magnetostatic FEA studies using the derived rotor bar currents.
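As background (a standard machine-analysis identity, not the thesis's extended cage model), any qd circuit model rests on transforming abc quantities into a synchronously rotating reference frame; a minimal sketch in Krause's convention, with illustrative values:

```python
# Park/qd0 transformation of stator abc quantities at electrical angle theta.
import numpy as np

def abc_to_qd0(f_abc, theta):
    """Transform a length-3 abc vector into qd0 quantities (Krause's convention)."""
    a = 2.0 * np.pi / 3.0
    K = (2.0 / 3.0) * np.array([
        [np.cos(theta), np.cos(theta - a), np.cos(theta + a)],
        [np.sin(theta), np.sin(theta - a), np.sin(theta + a)],
        [0.5,           0.5,               0.5],
    ])
    return K @ np.asarray(f_abc, dtype=float)

# Balanced three-phase currents map to constant qd values in the synchronous frame.
t = 0.01
we = 2 * np.pi * 60.0                 # synchronous electrical speed, assumed 60 Hz
theta = we * t
i_abc = [np.cos(we * t), np.cos(we * t - 2 * np.pi / 3), np.cos(we * t + 2 * np.pi / 3)]
print(abc_to_qd0(i_abc, theta))       # approximately [1, 0, 0]
```

The extended qd cage model described in the abstract builds additional rotor-bar detail on top of this frame; its exact formulation is not reproduced here.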
15

Numerical methods for computationally efficient and accurate blood flow simulations in complex vascular networks: Application to cerebral blood flow

Ghitti, Beatrice 04 May 2023 (has links)
It is now well established that the dynamics of the interacting fluid compartments of the central nervous system (CNS) may play a role in CNS fluid physiology and in the pathology of a number of neurological disorders, including neurodegenerative diseases associated with the accumulation of waste products in the brain. However, the mechanisms and routes of waste clearance from the brain are still unclear. One of the main components of this interacting cerebral fluid dynamics is blood flow. In recent decades, mathematical modeling and fluid dynamics simulations have become a valuable complement to experimental approaches, contributing to a deeper understanding of circulatory physiology and pathology. However, modeling blood flow in the brain remains a challenging and demanding task, due to the high complexity of cerebral vascular networks and the resulting difficulty of describing and reproducing blood flow dynamics in these vascular districts. The first part of this work is devoted to the development of efficient numerical strategies for blood flow simulations in complex vascular networks. In cardiovascular modeling, one-dimensional (1D) and lumped-parameter (0D) models of blood flow are nowadays well-established tools to predict flow patterns, pressure wave propagation and average velocities in vascular networks, with a good balance between accuracy and computational cost. Still, purely 1D modeling of blood flow in large and complex networks can result in computationally expensive simulations, calling for extremely efficient numerical methods and solvers. To address these issues, we develop a novel modeling and computational framework to construct hybrid networks of coupled 1D and 0D vessels and to perform computationally efficient and accurate blood flow simulations in such networks. Starting from a 1D model and a family of nonlinear 0D models for blood flow, with either elastic or viscoelastic tube laws, this methodology is based on (i) suitable coupling equations ensuring conservation principles; (ii) efficient numerical methods and coupling strategies to solve 1D, 0D and hybrid junctions of vessels; and (iii) model selection criteria to construct hybrid networks that provide a good trade-off between accuracy of the predicted results and computational cost of the simulations. By applying the proposed hybrid network solver to very large and complex vascular networks, we show how this methodology becomes crucial for gaining computational efficiency when solving networks and models in which the heterogeneity of spatial and/or temporal scales is relevant, while still ensuring a good level of accuracy in the predicted results. Hence, the proposed hybrid network methodology represents a first step towards a high-performance modeling and computational framework for solving highly complex networks of 1D-0D vessels, where the complexity depends not only on the anatomical detail by which a network is described, but also on the level at which physiological mechanisms and mechanical characteristics of the cardiovascular system are modeled. In the second part of the thesis, we focus on the modeling and simulation of cerebral blood flow, with emphasis on the venous side.
We develop a methodology that, starting from the high-resolution MRI data obtained with a novel in-vivo microvascular imaging technique of the human brain, allows detailed subject-specific cerebral networks of specific vascular districts to be reconstructed in a form suitable for blood flow simulations. First, we extract segmentations of the cerebral districts of interest in a way that addresses the arterio-venous separation and ensures the continuity and connectivity of the vascular structures. Equipped with these segmentations, we propose an algorithm to extract a network of vessels with the properties necessary to perform blood flow simulations. Here we focus on the reconstruction of detailed venous vascular networks, given that the anatomy and patho-physiology of the venous circulation are of great interest from both clinical and modeling points of view. Then, after calibration and parametrization of the MRI-reconstructed venous networks, blood flow simulations are performed to validate the proposed methodology and assess the ability of such networks to predict physiologically reasonable results in the corresponding vascular territories. From the results obtained we conclude that this work is a proof-of-concept study demonstrating that subject-specific cerebral networks can be extracted from the novel high-resolution MRI data employed, setting the basis for an effective processing pipeline for detailed blood flow simulations from subject-specific data, to explore and quantify cerebral blood flow dynamics, with a focus on venous blood drainage.
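As a generic illustration of the 0D building block used in such hybrid networks (a minimal sketch; the thesis's own 0D models are nonlinear, possibly viscoelastic, and come with dedicated coupling conditions, none of which are reproduced here), a two-element Windkessel compartment obeys C dP/dt = Q_in - P/R:

```python
# Two-element Windkessel: lumped compliance C and distal resistance R driven by a
# pulsatile inflow from the upstream (1D) side. All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

R = 1.0e8      # distal resistance [Pa s / m^3]
C = 1.0e-9     # compliance [m^3 / Pa]

def q_in(t):
    """Simple half-sine inflow pulse repeating with a 1 s period."""
    period, peak, systole = 1.0, 5.0e-6, 0.3   # s, m^3/s, s
    phase = t % period
    return peak * np.sin(np.pi * phase / systole) if phase < systole else 0.0

def windkessel(t, p):
    return [(q_in(t) - p[0] / R) / C]

sol = solve_ivp(windkessel, (0.0, 5.0), [10000.0], max_step=1e-3)
print("pressure range [Pa]:", sol.y[0].min(), "-", sol.y[0].max())
```

In a hybrid network, compartments of this kind replace 1D segments where full wave propagation is not needed, with coupling equations enforcing conservation of flow and continuity of pressure (or total pressure) at the interfaces.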
16

[pt] A AUTOEFICÁCIA: FIO CONDUTOR ENTRE AS PRÁTICAS PEDAGÓGICAS E AS TECNOLOGIAS DIGITAIS / [en] SELF-EFFICACY: THE GUIDING THREAD BETWEEN PEDAGOGICAL PRACTICES AND DIGITAL TECHNOLOGIES

ELIS RENATA DE BRITTO SANTOS 19 August 2019 (has links)
[pt] As tecnologias digitais estão promovendo várias transformações na sociedade, mas na educação essas ferramentas não ressignificaram as práticas pedagógicas. Este estudo entende as tecnologias digitais como artefatos culturais imersos nos nossos hábitos, costumes e crenças. A pesquisa visa compreender a perspectiva dos docentes sobre as tecnologias digitais a partir de um mergulho nas suas crenças pedagógicas, em particular a crença de autoeficácia computacional docente. O estudo quantitativo e qualitativo se desenvolveu em duas partes: entrevistas de 64 professores em 8 escolas públicas de ensino fundamental do Município do Rio de Janeiro, entre 2014-2016. E em 2018 ocorreram dois grupos focais em duas escolas selecionadas para prosseguir o estudo porque os docentes apresentaram percepções da tecnologia próximo de artefato cultural. Os resultados das entrevistas demonstraram que a maioria dos docentes (63) utilizam as tecnologias no âmbito pessoal, mas quando verificamos o ambiente escolar esse grupo diminuiu para 47. E apenas 17 destes conseguiram modificar suas práticas pedagógicas usando as TIC. A experiência direta foi a fonte de informação da autoeficácia mais relevante. Na análise dos grupos focais, novamente a experiência direta se destaca, junto com a persuasão social e a preocupação com o aluno (variável contextual). Ao final do estudo nota-se que a autoeficácia computacional docente não representa uma confiança contínua, muito pelo contrário é influenciada por outros fatores e por isso está em constante transformação. Para pesquisas futuras o aprofundamento do conhecimento sobre as crenças de autoeficácia com foco nas TIC é imprescindível, principalmente, a sua relação com outras incógnitas. / [en] Digital technologies are promoting various transformations in society, but in education these tools have not re-signified pedagogical practices. This study understands digital technologies as cultural artifacts immersed in our habits, customs and beliefs. The research aims to understand teachers' perspective on digital technologies by delving into their pedagogical beliefs, in particular the belief of teacher computational self-efficacy. The quantitative and qualitative study was developed in two parts: interviews with 64 teachers at 8 public elementary schools in the Municipality of Rio de Janeiro between 2014 and 2016, and, in 2018, two focus groups at the two schools selected to continue the study because their teachers' perceptions of technology were close to that of a cultural artifact. The results of the interviews showed that most of the teachers (63) use digital technologies in their personal lives, but in the school environment this group decreased to 47, and only 17 of these managed to modify their pedagogical practices using ICT. Direct experience was the most relevant source of self-efficacy information. In the analysis of the focus groups, direct experience again stands out, along with social persuasion and concern for the student (a contextual variable). At the end of the study it is noted that teachers' computational self-efficacy does not represent a continuous, stable confidence; on the contrary, it is influenced by other factors and is therefore in constant transformation. For future research, deepening the knowledge about self-efficacy beliefs with a focus on ICT is essential, especially their relation to other unknowns.
17

Near Realtime Object Detection : Optimizing YOLO Models for Efficiency and Accuracy for Computer Vision Applications

Abo Khalaf, Mulham January 2024 (has links)
Syftet med denna studie är att förbättra effektiviteten och noggrannheten hos YOLO-modeller genom att optimera dem, särskilt när de står inför begränsade datorresurser. Det akuta behovet av objektigenkänning i nära realtid i tillämpningar som övervakningssystem och autonom körning understryker betydelsen av bearbetningshastighet och exceptionell noggrannhet. Avhandlingen fokuserar på svårigheterna med att implementera komplexa modeller för objektidentifiering på enheter med låg kapacitet, nämligen Jetson Orin Nano. Den föreslår många optimeringsmetoder för att övervinna dessa hinder. Vi utförde flera försök och gjorde metodologiska förbättringar för att minska bearbetningskraven och samtidigt bibehålla en stark prestanda för objektdetektering. Viktiga komponenter i forskningen inkluderar noggrann modellträning, användning av bedömningskriterier och undersökning av optimeringseffekter på modellprestanda i verkliga miljöer. Studien visar att det är möjligt att uppnå optimal prestanda i YOLO-modeller trots begränsade resurser, vilket ger betydande framsteg inom datorseende och maskininlärning. / The objective of this study is to improve the efficiency and accuracy of YOLO models by optimizing them, particularly when faced with limited computing resources. The urgent need for near-realtime object recognition in applications such as surveillance systems and autonomous driving underscores the significance of processing speed and exceptional accuracy. The thesis focuses on the difficulties of implementing complex object identification models on low-capacity devices, namely the Jetson Orin Nano, and proposes several optimization methods to overcome these obstacles. We performed several trials and made methodological improvements to decrease processing requirements while maintaining strong object detection performance. Key components of the research include meticulous model training, the use of assessment criteria, and the investigation of optimization effects on model performance in real-life settings. The study demonstrates the feasibility of achieving optimal performance in YOLO models despite limited resources, bringing substantial advancements in computer vision and machine learning.
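As a hedged sketch of the deployment-side optimization workflow such a study involves (this assumes the ultralytics Python package; the checkpoint name, dataset, and arguments are illustrative, and the thesis's actual models and pipeline are not reproduced here):

```python
# Export a small YOLO model to a lighter runtime with reduced precision for an
# edge device such as the Jetson Orin Nano, then sanity-check accuracy.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # small pretrained model (assumed checkpoint)
# FP16 ONNX export; on a Jetson a TensorRT engine (format="engine") is the typical target.
model.export(format="onnx", half=True, imgsz=640)

# Validate on a small dataset (assumed) to track the accuracy/speed trade-off.
metrics = model.val(data="coco128.yaml", imgsz=640)
print(metrics.box.map50)                    # mAP@0.5 after the optimization step
```

The general point is that each optimization (smaller variant, lower precision, runtime-specific export) is evaluated against the same validation metrics so that gains in speed can be weighed against losses in accuracy.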
18

Improving Execution Speed of Models Implemented in NetLogo

Railsback, Steven, Ayllón, Daniel, Berger, Uta, Grimm, Volker, Lytinen, Steven, Sheppard, Colin, Thiele, Jan C. 30 March 2017 (has links) (PDF)
NetLogo has become a standard platform for agent-based simulation, yet there appears to be widespread belief that it is not suitable for large and complex models due to slow execution. Our experience does not support that belief. NetLogo programs often do run very slowly when written to minimize code length and maximize clarity, but relatively simple and easily tested changes can almost always produce major increases in execution speed. We recommend a five-step process for quantifying execution speed, identifying slow parts of code, and writing faster code. Avoiding or improving agent filtering statements can often produce dramatic speed improvements. For models with extensive initialization methods, reorganizing the setup procedure can reduce the initialization effort in simulation experiments. Programming the same behavior in a different way can sometimes provide order-of-magnitude speed increases. For models in which most agents do nothing on most time steps, discrete event simulation—facilitated by the time extension to NetLogo—can dramatically increase speed. NetLogo’s BehaviorSpace tool makes it very easy to conduct multiple-model-run experiments in parallel on either desktop or high performance cluster computers, so even quite slow models can be executed thousands of times. NetLogo also is supported by efficient analysis tools, such as BehaviorSearch and RNetLogo, that can reduce the number of model runs and the effort to set them up for (e.g.) parameterization and sensitivity analysis.
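To illustrate the discrete-event point in a language-agnostic way (shown here in Python rather than NetLogo; NetLogo's time extension provides the analogous mechanism, and the numbers below are illustrative assumptions), scheduling only the agents that actually act avoids querying every agent on every tick:

```python
# Event-driven scheduling with a priority queue: each agent schedules its next
# action time, and only due events are processed.
import heapq
import random

random.seed(0)
N_AGENTS, HORIZON, MEAN_WAIT = 100_000, 1_000.0, 500.0

events = [(random.expovariate(1 / MEAN_WAIT), agent) for agent in range(N_AGENTS)]
heapq.heapify(events)

processed = 0
while events and events[0][0] < HORIZON:
    t, agent = heapq.heappop(events)
    processed += 1                                    # the agent "acts" here
    heapq.heappush(events, (t + random.expovariate(1 / MEAN_WAIT), agent))

# A time-stepped loop with a tick of 1 would have executed roughly
# N_AGENTS * HORIZON = 1e8 per-agent checks, versus only `processed` events here.
print("events processed:", processed)
```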
20

The role of continual learning and adaptive computation in improving computational efficiency of deep learning

Gupta, Kshitij 01 1900 (has links)
Au cours de la dernière décennie, des progrès significatifs ont été réalisés dans le domaine de l’IA, principalement grâce aux progrès de l’apprentissage automatique, de l’apprentissage profond et de l’utilisation de modèles à grande échelle. Cependant, à mesure que ces modèles évoluent, ils présentent de nouveaux défis en termes de gestion de grands ensembles de données et d’efficacité informatique. Cette thèse propose des approches pour réduire les coûts de calcul de la formation et de l’inférence dans les systèmes d’intelligence artificielle (IA). Plus précisément, ce travail étudie les techniques d’apprentissage continu et de calcul adaptatif, démontrant des stratégies possibles pour préserver les niveaux de performance de ces systèmes tout en réduisant considérablement les coûts de formation et d’inférence. Les résultats du premier article montrent que les modèles de base peuvent être continuellement pré-entraînés grâce à une méthode d’échauffement et de relecture, ce qui réduit considérablement les coûts de calcul de l’entraînement tout en préservant les performances par rapport à un entraînement à partir de zéro. Par la suite, la thèse étudie comment les stratégies de calcul adaptatif, lorsqu’elles sont combinées avec la mémoire, peuvent être utilisées pour créer des agents d’IA plus efficaces au moment de l’inférence pour des tâches de raisonnement complexes, telles que le jeu stratégique de Sokoban. Nos résultats montrent que les modèles peuvent offrir des performances similaires ou améliorées tout en utilisant beaucoup moins de ressources de calcul. Les résultats de cette étude ont de vastes implications pour l’amélioration de l’efficacité informatique des systèmes d’IA, soutenant à terme le développement de technologies d’IA plus abordables, accessibles et efficaces. / Over the past decade, significant progress has been made in the field of AI, primarily due to advances in machine learning, deep learning, and the use of large-scale models. However, as these models scale, they present new challenges with respect to handling large datasets and remaining computationally efficient. This thesis proposes approaches to reducing the computational costs of training and inference in artificial intelligence (AI) systems. Specifically, this work investigates how Continual Learning and Adaptive Computation techniques can be used to reduce training and inference costs while preserving the performance levels of these systems. The findings of the first article show that foundation models can be continually pre-trained through a method of warm-up and replay, which significantly decreases training computational costs while preserving performance compared to training from scratch. Subsequently, the thesis investigates how adaptive computation strategies, when combined with memory, can be utilized to create more computationally efficient AI agents at inference time for complex reasoning tasks, such as the strategic game of Sokoban. Our results show that models can deliver similar or improved performance while using significantly fewer computational resources. The findings of this study have broad implications for improving the computational efficiency of AI systems, ultimately supporting the development of more affordable, accessible, and efficient AI technologies.
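As a hedged sketch of the warm-up-and-replay recipe referred to above (the model, datasets, replay fraction, and schedule lengths are illustrative assumptions, and a generic supervised loss stands in for language-model pre-training):

```python
# Continual pre-training phase: re-warm the learning rate for the new data and mix
# in a small fraction of replayed old data to limit forgetting.
import torch
from torch.utils.data import DataLoader, ConcatDataset, Subset

def continual_pretrain(model, old_dataset, new_dataset, steps=10_000,
                       replay_fraction=0.05, peak_lr=3e-4, warmup_steps=1_000):
    # Replay: a random subset of previously seen data mixed with the new data.
    n_replay = int(replay_fraction * len(new_dataset))
    replay_idx = torch.randperm(len(old_dataset))[:n_replay].tolist()
    loader = DataLoader(ConcatDataset([new_dataset, Subset(old_dataset, replay_idx)]),
                        batch_size=32, shuffle=True)

    opt = torch.optim.AdamW(model.parameters(), lr=peak_lr)
    # Warm-up then cosine decay, restarted for this continual phase.
    warmup = torch.optim.lr_scheduler.LinearLR(opt, start_factor=1e-2,
                                               total_iters=warmup_steps)
    decay = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=steps - warmup_steps)
    sched = torch.optim.lr_scheduler.SequentialLR(opt, [warmup, decay],
                                                  milestones=[warmup_steps])
    step = 0
    while step < steps:
        for inputs, targets in loader:
            loss = torch.nn.functional.cross_entropy(model(inputs), targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
            sched.step()
            step += 1
            if step >= steps:
                break
    return model
```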
