461

Planning for Slow Growth and Decline in Mid-Sized U.S. Cities / Planering för svag tillväxt och nedgång i mellanstora städer i USA

McKeag, Alex January 2019 (has links)
While many major cities in the United States are once again gaining population, growing their economies, and attracting talent, many small and mid-sized cities are in decline. The reasons for this growing disparity are multi-faceted. A growing body of research has been exploring planning challenges in declining cities and towns. This body of research—often called “shrinking cities” and “urban shrinkage” research—is premised on the belief that many declining places will continue to shed population, jobs, and industries, and planning smartly for this decline is the only sensible path forward. So far, research in the U.S. has focused primarily on Northeast and Midwest cities where population and industrial decline has been the most severe. Less scholarship has studied places that have declined more slowly and more recently. This thesis examines the current trends impacting the decline of mid-sized cities in the Midwestern United States, focusing on four cities in the State of Illinois. It also explores whether these cities are ready to consider the possibility that population decline is not temporary and change their planning strategies accordingly. Finally, this thesis will introduce an emerging paradigm in contemporary urban planning practice that fuses growth and decline strategies, to prepare mid-sized cities for an uncertain demographic and economic future.
462

”Alla barn har gnistan inom sig, det enda som behövs är att lyckas tända den” / “All children have the spark within them; all that is needed is to succeed in igniting it”

Alagic, Medina January 2017 (has links)
The main purpose of this study was to increase knowledge about what it is like to be a newly arrived refugee in the Swedish school and what role the Swedish school plays in the integration process. The respondents in this study all arrived in Sweden within the past four years as refugees from war-torn countries. In total, seven young people aged 15-16 took part in the study. How do the newly arrived youths experience their time in the preparatory class and the ordinary class? Which social networks do the newly arrived youths have in and outside school? How can the school help them by giving them the skills and ability to integrate into Swedish society? To answer these questions I conducted a qualitative study based on interviews with seven young people who are attending Swedish upper secondary school (8th or 9th grade) or a preparatory class. To analyze the qualitative interviews I used three theories: stigmatization, social identity, and the sociocultural perspective. The results show that the newly arrived students have both positive and negative experiences of the Swedish school. They experience the time in the preparatory class as safe; they have a strong sense of community with their classmates and receive support and help from their teachers. At the same time, they feel excluded, in both the preparatory and the ordinary class, due to the lack of wider social networks. According to the students, the Swedish school has more to improve when it comes to giving newly arrived youths greater opportunities in the integration process.
463

[en] MATHEMATICAL MODELS FOR THE ZIKA EPIDEMIC / [pt] MODELOS MATEMÁTICOS PARA A EPIDEMIA DO ZIKA

ERICK MANUEL DELGADO MOYA 18 March 2021 (has links)
[pt] Zika Vírus (ZIKV) é um vírus transmitido pelos mosquitos Aedes aegypti (mesmo transmissor da dengue e da febre chikungunya) e o Aedes albopictus. O contágio principal pelo ZIKV se dá pela picada do mosquito que, após se alimentar do sangue de alguém contaminado, pode transportar o ZIKV durante toda a sua vida, transmitindo a doença para uma população que não possui anticorpos contra ele. Também pode ser transmitido através de relação sexual de uma pessoa com Zika para os seus parceiros ou parceiras, mesmo que a pessoa infectada não apresente os sintomas da doença. Neste trabalho, apresentamos dois modelos matemáticos para a epidemia do ZIKV usando (1) equações diferenciais ordinárias e (2) equações diferenciais ordinárias com atraso temporal, que é o tempo que os mosquitos levam para desenvolver o vírus. Fazemos uma comparação entre as duas variantes de modelagem e, para facilitar o trabalho com os modelos, fornecemos uma interface gráfica com o usuário. Simulações computacionais são realizadas para o Suriname e El Salvador, que são países propensos a desenvolver a epidemia de maneira endêmica. Para estudar a difusão espacial do ZIKV, propomos um modelo baseado em equações de advecção-difusão e criamos um esquema numérico com elementos finitos e diferenças finitas para resolvê-lo. / [en] The Zika virus (ZIKV) is transmitted by the mosquitoes Aedes aegypti (which also transmits dengue and chikungunya fever) and Aedes albopictus. The main route of ZIKV contagion is the bite of a mosquito that, after feeding on the blood of an infected person, can carry the ZIKV for the rest of its life, transmitting the disease to a population that has no antibodies against it. It can also be transmitted sexually by a person infected with ZIKV to their partners, even if the infected person shows no symptoms of the disease. In this work, we present two mathematical models for the ZIKV epidemic using (1) ordinary differential equations and (2) ordinary differential equations with a temporal delay, which represents the time it takes mosquitoes to develop the virus. We compare the two modeling variants and, to facilitate working with the models, provide a graphical user interface. Computational simulations are performed for Suriname and El Salvador, countries prone to developing the epidemic in an endemic manner. In order to study the spatial diffusion of ZIKV, we propose a model based on advection-diffusion equations and create a numerical scheme with finite elements and finite differences to solve it.
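As a rough illustration of the first modelling variant described above (the delay-free ODE system), the sketch below sets up a minimal host-vector compartmental model; the compartment structure, parameter names, and values are illustrative assumptions, not the thesis's actual model or its calibration for Suriname or El Salvador.

```python
# Minimal host-vector ODE sketch for a ZIKV-like epidemic (illustrative only).
# Compartments and parameter values are assumptions, not the thesis's model.
from scipy.integrate import solve_ivp

beta_h = 0.30   # mosquito-to-human transmission rate (assumed)
beta_v = 0.25   # human-to-mosquito transmission rate (assumed)
gamma  = 1 / 7  # human recovery rate (assumed 7-day infectious period)
mu_v   = 1 / 14 # mosquito mortality rate (assumed 14-day lifespan)

def rhs(t, y):
    S_h, I_h, R_h, S_v, I_v = y
    N_h = S_h + I_h + R_h
    new_h = beta_h * S_h * I_v / N_h            # humans infected by mosquito bites
    new_v = beta_v * S_v * I_h / N_h            # mosquitoes infected when biting humans
    return [-new_h,
            new_h - gamma * I_h,
            gamma * I_h,
            mu_v * (S_v + I_v) - new_v - mu_v * S_v,   # keeps vector population constant
            new_v - mu_v * I_v]

y0 = [9999, 1, 0, 20000, 0]                     # one initial human case, no infected vectors
sol = solve_ivp(rhs, (0, 365), y0)
print("peak human prevalence:", sol.y[1].max())
```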
464

An Experimental Setup based on 3D Printing to test Viscoelastic Arterial Models

Dei-Awuku, Linda 08 1900 (has links)
Cardiovascular diseases (CVDs) are a leading cause of death worldwide, emphasizing the need for advanced and effective intervention and treatment measures. Hypertension, a significant risk factor for CVDs, is characterized by reduced vascular compliance in arterial vessels. Interest in exploring the viscoelastic properties of arteries for the treatment of these diseases has risen significantly in recent years. This study aims to develop an experimental setup using 3D printing technology to test viscoelastic arterial models for the validation of a diagnostic device for cardiovascular diseases. The research investigates the selection of polymer-based materials that closely mimic the viscoelastic properties of arterial vessels. An experimental setup is designed and fabricated to perform mechanical tests on 3D-printed specimens. The study utilizes a mathematical model to describe the viscoelastic behavior of the materials. The model's predictions are validated using experimental data obtained from the mechanical tests. This study demonstrates the potential of 3D printing technology in fabricating specimens using elastic and flexible resin materials. These specimens closely replicate the mechanical properties of native arteries, offering a tangible platform for controlled mechanical testing. Stress relaxation tests on the 3D-printed specimens highlight the viscoelastic properties of the fabricated materials, shedding light on their behavior under strain. The study further models the mechanics of these materials, utilizing the Fractional Voigt model to capture the intricate balance between elastic and resistive behaviors under varying deformation levels. The results highlight the successful fitting of the Fractional Voigt model to the experimental data, confirming the viscoelastic behavior of the specimens. The obtained values of α and RMSE indicate a good representation of arterial mechanical properties within the viscoelastic arterial model under different loading conditions. This research contributes to improving cardiovascular device validation and offers a practical and reliable alternative to invasive experiments. Future work includes exploring different materials and conditions for arterial modeling and enhancing the precision and scope of the viscoelastic model. Overall, this study advances the understanding of cardiovascular biomechanics, contributing to the development of more effective diagnostic devices for cardiovascular diseases.
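The abstract reports fitting the Fractional Voigt model and evaluating α and RMSE, but does not spell out the constitutive equation used. The sketch below assumes the commonly cited relaxation-modulus form G(t) = E + η t^(−α) / Γ(1 − α) and shows how such a fit and its RMSE could be obtained; the function names and synthetic data are placeholders, not the thesis's implementation.

```python
# Sketch: fitting a fractional Voigt relaxation modulus to stress-relaxation data.
# The form G(t) = E + eta * t**(-alpha) / Gamma(1 - alpha) is a commonly used
# expression for this model; the thesis's exact formulation may differ.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma as Gamma

def relaxation_modulus(t, E, eta, alpha):
    return E + eta * t ** (-alpha) / Gamma(1.0 - alpha)

# Synthetic stand-in for measured stress / initial-strain data (replace with test output).
t_data = np.linspace(0.1, 100, 200)
G_data = relaxation_modulus(t_data, 0.8, 0.3, 0.25) + np.random.normal(0, 0.01, t_data.size)

popt, _ = curve_fit(relaxation_modulus, t_data, G_data,
                    p0=[1.0, 0.1, 0.3], bounds=([0, 0, 0.01], [np.inf, np.inf, 0.99]))
rmse = np.sqrt(np.mean((relaxation_modulus(t_data, *popt) - G_data) ** 2))
print(f"E={popt[0]:.3f}, eta={popt[1]:.3f}, alpha={popt[2]:.3f}, RMSE={rmse:.4f}")
```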
465

Modelling and Simulation of Complete Wheel Loader in Modelica : Evaluation using Modelon Impact software / Modellering och simulering av en komplett hjullastare i Modelica : Utvärdering med hjälp av programvaran Modelon Impact

Teta, Paolo January 2022 (has links)
Modelling and simulation of complex, multi-domain mechanical systems has become increasingly important in recent years for evaluating energy and fuel consumption performance. The goal is to unify the available modelling languages in order to improve scalability and ease of handling complex multi-domain models. The Modelica modelling language was introduced in 1997. It has three main features: it is object-oriented, equation-based with a non-causal design structure, and multi-domain. This thesis aims to give an overview of using Modelica in the Modelon Impact software to model and simulate a complete 3D wheel loader dynamic system. The project shows how the model was developed, focusing on the implementation of each sub-system. The 3D wheel loader model is designed following top-down and bottom-up design approaches, focusing on the powertrain sub-system with the engine, transmission and driveline blocks. The two approaches are combined to streamline the modelling process and exploit the benefits of each. For the simulation experiments, test rig models are implemented to verify the dynamics of individual sub-systems. The model is simulated with a set of input signals, solving the dynamic equations with different numerical solvers and comparing the elapsed simulation time. The simulation results show that the Radau5ODE solver achieves faster simulation with a stable solution thanks to its variable step size. However, further studies and more specific background knowledge are needed to extend the complexity of the model and compare it with the already existing one. / Modellering och simulering av komplexa mekaniska system med flera domäner har fått stor betydelse under de senaste åren för att utvärdera energi och bränsleförbrukning. Målet är att förena de tillgängliga modelleringsspråken för att förbättra skalbarheten och underlätta hanteringen av komplexa modeller med flera områden. Modelica-modelleringsspråket föddes 1997. Det har tre huvudfunktioner: objektorienterat, ekvationsbaserat med icke-kausal designstruktur och en miljö med flera områden. Syftet med denna avhandling är att ge en översikt över användningen av Modelica i programvaran Modelon Impact för att modellera och simulera ett komplett dynamiskt 3D-system för hjullastare. Projektet vill visa hur modellen har utvecklats med fokus på varje delsystems genomförande. 3D-modellen för hjullastaren har utformats enligt top-down och bottom-up principerna och fokuserar på delsystemet drivlina med motor, transmission och drivlina. Kombinationen av de två logikerna används för att jämna ut modelleringsvägen och utnyttja alla fördelar. För simuleringsförsöken har testriggmodeller införts för att kontrollera dynamiken hos enskilda delsystem. Modellen simuleras med en uppsättning insignaler och de dynamiska ekvationerna löses med hjälp av olika numeriska lösare, varefter den förflutna simuleringstiden jämförs. Simuleringsresultaten visar att lösaren Radau5ODE ger en snabbare simulering med en stabil lösning som ges av parametern variabel stegstorlek. Det behövs dock fler studier och mer specifik bakgrund för att uppdatera modellens komplexitet och jämföra den med redan existerande modeller.
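The solver comparison described above is carried out in Modelon Impact, whose models are not reproduced here. As a stand-in, the sketch below compares an implicit Radau method against an explicit RK45 method on a small stiff test system in SciPy, illustrating the kind of elapsed-time comparison the thesis performs; the Van der Pol system and all parameter values are illustrative assumptions.

```python
# Compare an implicit (Radau) and an explicit (RK45) solver on a stiff test system
# and report elapsed wall-clock time. The test system is an assumed stand-in, not
# the wheel loader model from the thesis.
import time
from scipy.integrate import solve_ivp

def stiff_rhs(t, y):
    mu = 100.0                                  # stiffness parameter (assumed)
    return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

for method in ("Radau", "RK45"):
    start = time.perf_counter()
    sol = solve_ivp(stiff_rhs, (0.0, 200.0), [2.0, 0.0], method=method, rtol=1e-6)
    elapsed = time.perf_counter() - start
    print(f"{method}: {sol.nfev} RHS evaluations in {elapsed:.2f} s")
```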
466

Unga immigranters upplevelser av inkludering/exkludering i svensk gymnasieskola / Young immigrants' experiences of inclusion/exclusion in Swedish upper secondary school

Khoshaba, Mari, Pezo, Enna January 2023 (has links)
Tidigare undersökningar påvisade i sina resultat att immigranter upplever någon form av social inkludering i introduktionsklass, och social exkludering i ordinarieklass. Mot bakgrund i tidigare forskning visades det pågå en samhällsdiskussion om detta ämne, vilket är inspirationen till denna studies forskningsområde. Studiens frågeställningar är formulerade mot bakgrund i att undersöka hur unga immigranter upplever inkludering och exkludering i svensk gymnasieskola i relation till klasskamrater och lärare. En kvalitativ ansats är tillämpad, som använder sig av 10 semistrukturerade intervjuer med tio respondenter, där samtliga respondenter har varit bosatta i Sverige mellan 5-10 år. De teorier och begrepp som stödjer denna studie använder sig av bland annat social identitetsteori av Tajfel och ackulturationsprocess av Berry. De huvudsakliga uppgifterna från respondenterna indikerade att ett positivt bemötande från klasskamraterna, med hjälpsamma och accepterande lärare, bidrog till upplevd inkludering i både SVA-klassen och i ordinarieklassen. De främsta orsakerna till att respondenterna upplevde exkludering i skolan i relation till klasskamrater och lärare grundade sig i otrevligt bemötande med spår av mobbning, samt fördomar som grundade sig i diskriminerande attityder. Vad som framgick av resultatet visade sig att maximal inkludering kunde upplevas i både SVA- och ordinarieklass, men det var ett mindre antal respondenter som upplevde det. / Previous studies have shown that immigrants experience some form of social inclusion in the introductory class and social exclusion in the ordinary class. Against the background of previous research, there is an ongoing societal discussion on this topic, which is the inspiration for this study's research area. The study's research questions investigate how young immigrants experience inclusion and exclusion in Swedish upper secondary school in relation to classmates and teachers. A qualitative approach is applied, using ten semi-structured interviews with ten respondents, all of whom have lived in Sweden for between 5 and 10 years. The theories and concepts that support this study include, among others, Tajfel's social identity theory and Berry's acculturation process. The main data from the respondents indicated that a positive attitude from classmates, together with helpful and accepting teachers, contributed to perceived inclusion in both the SVA class and the regular class. The main reasons why the respondents experienced exclusion at school in relation to classmates and teachers were unpleasant treatment with traces of bullying, as well as prejudices based on discriminatory attitudes. The results showed that maximum inclusion could be experienced in both the SVA class and the ordinary class, but only a minority of respondents experienced it.
467

Le lasso linéaire : une méthode pour des données de petites et grandes dimensions en régression linéaire / The linear Lasso: a method for low- and high-dimensional data in linear regression

Watts, Yan 04 1900 (has links)
Dans ce mémoire, nous nous intéressons à une façon géométrique de voir la méthode du Lasso en régression linéaire. Le Lasso est une méthode qui, de façon simultanée, estime les coefficients associés aux prédicteurs et sélectionne les prédicteurs importants pour expliquer la variable réponse. Les coefficients sont calculés à l’aide d’algorithmes computationnels. Malgré ses vertus, la méthode du Lasso est forcée de sélectionner au maximum n variables lorsque nous nous situons en grande dimension (p > n). De plus, dans un groupe de variables corrélées, le Lasso sélectionne une variable “au hasard”, sans se soucier du choix de la variable. Pour adresser ces deux problèmes, nous allons nous tourner vers le Lasso Linéaire. Le vecteur réponse est alors vu comme le point focal de l’espace et tous les autres vecteurs de variables explicatives gravitent autour du vecteur réponse. Les angles formés entre le vecteur réponse et les variables explicatives sont supposés fixes et nous serviront de base pour construire la méthode. L’information contenue dans les variables explicatives est projetée sur le vecteur réponse. La théorie sur les modèles linéaires normaux nous permet d’utiliser les moindres carrés ordinaires (MCO) pour les coefficients du Lasso Linéaire. Le Lasso Linéaire (LL) s’effectue en deux étapes. Dans un premier temps, des variables sont écartées du modèle basé sur leur corrélation avec la variable réponse; le nombre de variables écartées (ou ordonnées) lors de cette étape dépend d’un paramètre d’ajustement γ. Par la suite, un critère d’exclusion basé sur la variance de la distribution de la variable réponse est introduit pour retirer (ou ordonner) les variables restantes. Une validation croisée répétée nous guide dans le choix du modèle final. Des simulations sont présentées pour étudier l’algorithme en fonction de différentes valeurs du paramètre d’ajustement γ. Des comparaisons sont effectuées entre le Lasso Linéaire et des méthodes compétitrices en petites dimensions (Ridge, Lasso, SCAD, etc.). Des améliorations dans l’implémentation de la méthode sont suggérées, par exemple l’utilisation de la règle du 1se nous permettant d’obtenir des modèles plus parcimonieux. Une implémentation de l’algorithme LL est fournie dans la fonction R intitulée linlasso, disponible au https://github.com/yanwatts/linlasso. / In this thesis, we are interested in a geometric way of looking at the Lasso method in the context of linear regression. The Lasso is a method that simultaneously estimates the coefficients associated with the predictors and selects the important predictors to explain the response variable. The coefficients are calculated using computational algorithms. Despite its virtues, the Lasso method is forced to select at most n variables when we are in high-dimensional contexts (p > n). Moreover, in a group of correlated variables, the Lasso selects a variable “at random”, without caring about the choice of the variable. To address these two problems, we turn to the Linear Lasso. The response vector is then seen as the focal point of the space and all other explanatory-variable vectors orbit around the response vector. The angles formed between the response vector and the explanatory variables are assumed to be fixed, and will be used as a basis for constructing the method. The information contained in the explanatory variables is projected onto the response vector. The theory of normal linear models allows us to use ordinary least squares (OLS) for the coefficients of the Linear Lasso.
The Linear Lasso (LL) is performed in two steps. First, variables are dropped from the model based on their correlation with the response variable; the number of variables dropped (or ordered) in this step depends on a tuning parameter γ. Then, an exclusion criterion based on the variance of the distribution of the response variable is introduced to remove (or order) the remaining variables. Repeated cross-validation guides us in the choice of the final model. Simulations are presented to study the algorithm for different values of the tuning parameter γ. Comparisons are made between the Linear Lasso and competing methods in small dimensions (Ridge, Lasso, SCAD, etc.). Improvements in the implementation of the method are suggested, for example the use of the 1se rule, which allows us to obtain more parsimonious models. An implementation of the LL algorithm is provided in the R function linlasso, available at https://github.com/yanwatts/linlasso.
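The authors' implementation is the linlasso R function linked above. Purely as a schematic reading of the first screening step described in the abstract (ranking predictors by their correlation with the response, with a tuning parameter γ controlling how many are retained before OLS is applied), here is a minimal sketch; the function name, the mapping from γ to a number of retained variables, and the toy data are assumptions, and the variance-based second step is not shown.

```python
# Schematic sketch of the screening idea: rank predictors by |correlation with y| and
# keep only the top fraction before fitting OLS. An illustration of the described
# procedure, not the authors' linlasso code.
import numpy as np

def correlation_screen_ols(X, y, gamma=0.5):
    """Keep the ceil(gamma * p) predictors most correlated with y, then fit OLS."""
    n, p = X.shape
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    corr = np.abs(Xc.T @ yc) / n                 # |sample correlation| with the response
    k = max(1, int(np.ceil(gamma * p)))
    keep = np.argsort(corr)[::-1][:k]            # indices of retained predictors
    beta, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    return keep, beta

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))                   # a p > n setting
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=50)
keep, beta = correlation_screen_ols(X, y, gamma=0.1)
print("retained predictors:", keep[:5], "...")
```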
468

Novel neural architectures & algorithms for efficient inference

Kag, Anil 30 August 2023 (has links)
In the last decade, the machine learning universe embraced deep neural networks (DNNs) wholeheartedly with the advent of neural architectures such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), transformers, etc. These models have empowered many applications, such as ChatGPT, Imagen, etc., and have achieved state-of-the-art (SOTA) performance on many vision, speech, and language modeling tasks. However, SOTA performance comes with various issues, such as large model size, compute-intensive training, increased inference latency, higher working memory, etc. This thesis aims at improving the resource efficiency of neural architectures, i.e., significantly reducing the computational, storage, and energy consumption of a DNN without any significant loss in performance. Towards this goal, we explore novel neural architectures as well as training algorithms that allow low-capacity models to achieve near SOTA performance. We divide this thesis into two dimensions: Efficient Low Complexity Models, and Input Hardness Adaptive Models. Along the first dimension, i.e., Efficient Low Complexity Models, we improve DNN performance by addressing instabilities in the existing architectures and training methods. We propose novel neural architectures inspired by ordinary differential equations (ODEs) to reinforce input signals and attend to salient feature regions. In addition, we show that carefully designed training schemes improve the performance of existing neural networks. We divide this exploration into two parts: (a) Efficient Low Complexity RNNs. We improve RNN resource efficiency by addressing poor gradients, noise amplifications, and BPTT training issues. First, we improve RNNs by solving ODEs that eliminate vanishing and exploding gradients during the training. To do so, we present Incremental Recurrent Neural Networks (iRNNs) that keep track of increments in the equilibrium surface. Next, we propose Time Adaptive RNNs that mitigate the noise propagation issue in RNNs by modulating the time constants in the ODE-based transition function. We empirically demonstrate the superiority of ODE-based neural architectures over existing RNNs. Finally, we propose the Forward Propagation Through Time (FPTT) algorithm for training RNNs. We show that FPTT yields significant gains compared to the more conventional Backward Propagation Through Time (BPTT) scheme. (b) Efficient Low Complexity CNNs. Next, we improve CNN architectures by reducing their resource usage. They require greater depth to generate high-level features, resulting in computationally expensive models. We design a novel residual block, the Global layer, that constrains the input and output features by approximately solving partial differential equations (PDEs). It yields better receptive fields than traditional convolutional blocks and thus results in shallower networks. Further, we reduce the model footprint by enforcing a novel inductive bias that formulates the output of a residual block as a spatial interpolation between high-compute anchor pixels and low-compute cheaper pixels. This results in spatially interpolated convolutional blocks (SI-CNNs) that have better compute and performance trade-offs. Finally, we propose an algorithm that enforces various distributional constraints during training in order to achieve better generalization. We refer to this scheme as distributionally constrained learning (DCL).
In the second dimension, i.e., Input Hardness Adaptive Models, we introduce the notion of the hardness of any input relative to any architecture. In the first dimension, a neural network allocates the same resources, such as compute, storage, and working memory, for all the inputs. It inherently assumes that all examples are equally hard for a model. In this dimension, we challenge this assumption using input hardness as our reasoning that some inputs are relatively easy for a network to predict compared to others. Input hardness enables us to create selective classifiers wherein a low-capacity network handles simple inputs while abstaining from a prediction on the complex inputs. Next, we create hybrid models that route the hard inputs from the low-capacity abstaining network to a high-capacity expert model. We design various architectures that adhere to this hybrid inference style. Further, input hardness enables us to selectively distill the knowledge of a high-capacity model into a low-capacity model by cleverly discarding hard inputs during the distillation procedure. Finally, we conclude this thesis by sketching out various interesting future research directions that emerge as an extension of different ideas explored in this work.
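The abstract summarizes these architectures without giving their equations. As a loose illustration of the ODE-inspired recurrent update idea (moving the hidden state a few Euler steps toward the equilibrium of a standard RNN cell rather than overwriting it), the sketch below uses plain NumPy; the update rule, step sizes, and dimensions are assumptions and do not reproduce the iRNN or Time Adaptive RNN formulations defined in the thesis.

```python
# Generic sketch of an ODE-inspired recurrent update: take a few small Euler steps of
# dh/ds = phi(W x_t + U h + b) - h, so the state moves toward an equilibrium of a
# standard RNN cell. Illustrative only; not the thesis's exact architecture.
import numpy as np

def ode_rnn_step(h, x, W, U, b, n_steps=3, eta=0.3):
    for _ in range(n_steps):                     # a few explicit Euler steps (assumed)
        h = h + eta * (np.tanh(W @ x + U @ h + b) - h)
    return h

rng = np.random.default_rng(0)
d_in, d_hid = 8, 16
W = rng.normal(scale=0.1, size=(d_hid, d_in))
U = rng.normal(scale=0.1, size=(d_hid, d_hid))
b = np.zeros(d_hid)

h = np.zeros(d_hid)
for x_t in rng.normal(size=(20, d_in)):          # unroll over a toy input sequence
    h = ode_rnn_step(h, x_t, W, U, b)
print("final hidden state norm:", np.linalg.norm(h))
```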
469

#malmö och stadens representation : En studie om vardagskultur i Malmö, grundad i Instagram / #malmö and City Representation : A study of ordinary culture in Malmö, grounded in Instagram

Svensson, Kimmo January 2023 (has links)
The study examined the most common visual representations of the city of Malmö on Instagram's #malmö hashtag over four days. The images were analyzed from a contextual perspective and with a constructivist approach. Results from the study were utilized in the related media production: a series of illustrated and animated Instagram advertisements for Malmö's tourist office.
470

Hopf Bifurcation Analysis of Chaotic Chemical Reactor Model

Mandragona, Daniel 01 January 2018 (has links)
Bifurcations in Huang's chaotic chemical reactor system leading from simple dynamics into chaotic regimes are considered. Following the linear stability analysis, the periodic orbit resulting from a Hopf bifurcation of any of the six fixed points is constructed analytically by the method of multiple scales across successively slower time scales, and its stability is then determined by the resulting final secularity condition. Furthermore, we run numerical simulations of our chemical reactor at a particular fixed point of interest, alongside a set of parameter values that forces our system to undergo Hopf bifurcation. These numerical simulations then verify our analysis of the normal form.
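Huang's reactor equations are not reproduced in the abstract, so the sketch below uses the classic Brusselator chemical-oscillator model as a stand-in to illustrate the linear-stability step: sweep a parameter, evaluate the Jacobian at the fixed point, and detect where a complex-conjugate eigenvalue pair crosses the imaginary axis (the Hopf condition). The choice of model, parameter range, and detection logic are assumptions, not the thesis's computation.

```python
# Detect a Hopf bifurcation numerically by sweeping a parameter and watching the real
# part of the Jacobian eigenvalues at the fixed point change sign. The Brusselator is
# used as an illustrative stand-in; its known Hopf point is b = 1 + a**2.
import numpy as np

a = 1.0

def jacobian(b):
    # Brusselator: x' = a - (b + 1) x + x**2 y,  y' = b x - x**2 y; fixed point (a, b/a).
    x, y = a, b / a
    return np.array([[-(b + 1) + 2 * x * y, x ** 2],
                     [b - 2 * x * y, -x ** 2]])

prev_sign = None
for b in np.linspace(1.0, 3.0, 2000):
    max_real = np.linalg.eigvals(jacobian(b)).real.max()
    sign = np.sign(max_real)
    if prev_sign is not None and sign != prev_sign:
        print(f"Hopf bifurcation detected near b = {b:.3f} (theory: b = 1 + a**2 = {1 + a**2})")
        break
    prev_sign = sign
```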
