  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Optimal Design of Neuro-Mechanical Networks

Thore, Carl-Johan January 2012 (has links)
Many biological and artificial systems are made up from similar, relatively simple elements that interact directly with their nearest neighbors. Despite the simplicity of the individual building blocks, systems of this type, network systems, often display complex behavior — an observation which has inspired disciplines such as artificial neural networks and modular robotics. Network systems have several attractive properties, including distributed functionality, which enables robustness, and the possibility to use the same elements in different configurations. The uniformity of the elements should also facilitate development of efficient methods for system design, or even self-reconfiguration. These properties make it interesting to investigate the idea of constructing mechatronic systems based on networks of simple elements. This thesis concerns modeling and optimal design of a class of active mechanical network systems referred to as Neuro-Mechanical Networks (NMNs). To make matters concrete, a mathematical model that describes an actuated truss with an artificial recurrent neural network superimposed onto it is developed and used. A typical NMN is likely to consist of a substantial number of elements, making design of NMNs for various tasks a complex undertaking. For this reason, the use of numerical optimization methods in the design process is advocated. Application of such methods is exemplified in four appended papers that describe optimal design of NMNs which should take on static configurations or follow time-varying trajectories given certain input stimuli. The considered optimization problems are nonlinear, non-convex, and potentially large-scale, but numerical results indicate that useful designs can be obtained in practice. The last paper in the thesis deals with a solution method for optimization problems with matrix inequality constraints. 
The method described was developed primarily for solving optimization problems stated in some of the other appended papers, but is also applicable to other problems in control theory and structural optimization.
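The matrix inequality constraints mentioned above can be illustrated with a small feasibility check: a linear matrix inequality (LMI) requires an affine combination of symmetric matrices to be positive semidefinite, which can be tested via its smallest eigenvalue. The matrices and values below are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def lmi_feasible(A0, A_list, x, tol=1e-9):
    """Check feasibility of the linear matrix inequality
    A(x) = A0 + sum_i x_i * A_i  >= 0  (positive semidefinite)
    by testing the smallest eigenvalue of the symmetric matrix A(x)."""
    A = A0 + sum(xi * Ai for xi, Ai in zip(x, A_list))
    return bool(np.linalg.eigvalsh(A).min() >= -tol)

# Toy example: A(x) = diag(x1, x2) is PSD iff x1 >= 0 and x2 >= 0.
A0 = np.zeros((2, 2))
A1 = np.diag([1.0, 0.0])
A2 = np.diag([0.0, 1.0])
print(lmi_feasible(A0, [A1, A2], [1.0, 2.0]))   # True
print(lmi_feasible(A0, [A1, A2], [-1.0, 2.0]))  # False
```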
22

Composite Structure Optimization using a Homogenized Material Approach

Hozić, Dženan January 2014 (has links)
The increasing use of fibre-reinforced composite materials in the manufacturing of high-performance structures is primarily driven by their superior strength-to-weight ratio when compared to traditional metallic alloys. This provides the ability to design and manufacture lighter structures with improved mechanical properties. However, the specific manufacturing process of composite structures, along with the orthotropic material properties exhibited by fibre-reinforced composite materials, results in a complex structural design process where a number of different design parameters and manufacturing issues, which affect the mechanical properties of the composite structure, have to be considered. An efficient way to do this is to implement structural optimization techniques in the structural design process, thus improving the ability of the design process to find design solutions which satisfy the structural requirements imposed on the composite structure. This thesis describes a two-phase composite structure optimization method based on a novel material homogenization approach. The proposed method consists of a stiffness optimization problem and a lay-up optimization problem, respectively, with the aim to obtain a manufacturable composite structure with maximized stiffness properties. The homogenization approach is applied in both optimization problems, such that the material properties of the composite structure are homogenized. In the proposed method, the stiffness optimization problem provides a composite structure with maximized stiffness properties by finding the optimal distribution of composite material across the design domain. The aim of the lay-up optimization problem is to obtain a manufacturable lay-up sequence of fibre-reinforced composite plies for the composite structure which, as far as possible, retains the stiffness properties given by the stiffness optimization problem. The ability of the composite structure optimization method to obtain manufacturable composite structures is tested and confirmed by a number of numerical tests.
23

System Dynamics Statistics (SDS) : A Statistical Tool for Stochastic System Dynamics Modeling and Simulation

Gustafsson, Erik January 2017 (has links)
This thesis is about the creation of a tool (SDS) for statistical analysis of stochastic System Dynamics models. System Dynamics is a specific field of simulation models based on a system of ordinary differential equations and algebraic equations. The tool is intended for analyzing stochastic System Dynamics models in various fields including biology, ecology, agriculture, economy, epidemiology, military strategy, physics, chemistry and many others. In particular, this project was initiated to fulfill the needs of a joint epidemiological project at Uppsala University (UU) and Karolinska Institute (KI). It is also intended to be used in basic courses in simulation at KI and the Swedish University of Agricultural Sciences (SLU). A stochastic model has to be run a large number of times to reveal its behavior. SDS performs the analysis in the following way. First it connects to the System Dynamics engine containing the model. Then a specified number of simulation runs are ordered. For each run, the results of specified quantities are collected. From the collected data, various statistical measures are calculated, such as averages, standard deviations and confidence intervals. The statistics can then be presented graphically in the form of distributions, histograms, scatter plots, and box plots. Finally, all features of SDS were thoroughly tested using manual testing: SDS was checked for statistical correctness and then evaluated against some stochastic models.
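The workflow described above (run the stochastic model many times, collect an output quantity, then compute averages, standard deviations and confidence intervals) can be sketched as follows. The toy one-stock model and all its constants are invented for illustration; they are not the SDS engine or any model from the thesis.

```python
import math
import random
import statistics

def run_model(steps=100, dt=0.1, seed=None):
    """Toy stochastic System Dynamics model: a single stock with a
    constant inflow and a noisy outflow, integrated with Euler steps."""
    rng = random.Random(seed)
    stock = 0.0
    for _ in range(steps):
        inflow = 1.0
        outflow = 0.1 * stock * (1.0 + 0.2 * rng.gauss(0.0, 1.0))
        stock += dt * (inflow - outflow)
    return stock

# Run the model many times and summarise the end value of the stock.
runs = [run_model(seed=i) for i in range(200)]
mean = statistics.mean(runs)
sd = statistics.stdev(runs)
half = 1.96 * sd / math.sqrt(len(runs))  # normal-approximation 95% CI
print(f"mean={mean:.2f}  sd={sd:.2f}  95% CI=[{mean - half:.2f}, {mean + half:.2f}]")
```

From the same collected `runs` list, histograms, scatter plots and box plots can then be produced with any plotting library.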
24

CAN bus Data Stream Wrapper

Yang, Yang January 2017 (has links)
A data stream management system (DSMS) is similar to a database system, with the difference that a DSMS can search data directly in on-line streams as well as query stored data, while a DBMS can search only stored data. Stream queries are called continuous queries because they run all the time until they are terminated. SCSQ is an extensible DSMS allowing different kinds of data sources to be integrated and queried. A SCSQ interface to a data stream system is called a data stream wrapper. A data stream wrapper allows continuous queries to be specified over an external data-stream-producing system. The Controller Area Network bus (CAN bus) is a standard for interfacing data streams from different kinds of equipment and engines, such as wheel loaders and other vehicles. The objective of the project is to develop an interface, called a CAN bus data stream wrapper, to enable SCSQ to access streams of sensor readings from industrial equipment through CAN bus standard interfaces. It enables the SCSQ user to specify continuous queries over equipment data streams.
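The kind of low-level work such a wrapper must do can be illustrated by decoding signals out of a raw CAN frame (an 11-bit identifier plus up to 8 data bytes). The message ID, signal layout, scale factors and offsets below are hypothetical examples, not an actual equipment definition or the SCSQ wrapper's code.

```python
import struct

def decode_frame(can_id, data):
    """Decode signals from a raw CAN 2.0A frame payload.
    The layout (engine speed in bytes 0-1 big-endian, coolant
    temperature in byte 2) is a made-up example."""
    if can_id == 0x0CF:  # hypothetical "engine status" message
        rpm_raw, temp_raw = struct.unpack_from(">HB", data, 0)
        return {"engine_rpm": rpm_raw * 0.25,      # 0.25 rpm per bit
                "coolant_temp_c": temp_raw - 40}   # 40 degC offset
    return None  # unknown message: pass over it

frame = bytes([0x1F, 0x40, 0x64, 0, 0, 0, 0, 0])
print(decode_frame(0x0CF, frame))  # {'engine_rpm': 2000.0, 'coolant_temp_c': 60}
```

A wrapper would apply such a decoder to each arriving frame and emit the resulting records as a stream for continuous queries to consume.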
25

Convolutional neural networks for classification of transmission electron microscopy imagery

Gryshkevych, Sergii January 2017 (has links)
One of Vironova's electron microscopy services is to classify liposomes. This includes determining the structure of a liposome and the presence of a liposomal encapsulation. A typical service analysis contains a lot of electron microscopy images, so automatic classification is of great interest. The purpose of this project is to evaluate convolutional neural networks for solving the lamellarity and encapsulation classification problems. The available data sets are imbalanced, so a number of techniques to overcome this problem are studied. The convolutional neural network models have reasonable performance and offer great flexibility, so they can be an alternative to the support vector machine method which is currently used to perform automatic classification tasks. The project also includes a feasibility study of convolutional neural networks from Vironova's perspective.
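One common remedy for imbalanced data sets of the kind mentioned above is to weight the loss of each class by its inverse frequency, so rare classes contribute as much to training as common ones. The sketch below only computes such weights; the class names and counts are invented, and the thesis may use this or other rebalancing techniques (e.g. oversampling or augmentation).

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights: weight(c) = n / (k * count(c)),
    so that the weights average to 1 across all samples."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# e.g. 90 "unilamellar" vs 10 "multilamellar" liposome images
labels = ["uni"] * 90 + ["multi"] * 10
print(class_weights(labels))  # minority class gets the larger weight
```

These weights would then typically be passed to the loss function of the network (most deep-learning frameworks accept per-class weights directly).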
26

Parallel Bayesian Additive Regression Trees, using Apache Spark

Geirsson, Sigurdur January 2017 (has links)
New methods have been developed to find patterns and trends in order to gain knowledge from large datasets in various disciplines, such as bioinformatics, consumer behavior in advertising and weather forecasting. The goal of many of these new methods is to construct prediction models from the data. Linear regression, which is widely used for analyzing data, is very powerful for detecting simple patterns, but higher complexity requires a more sophisticated solution. Regression trees split the problem into numerous parts, but they do not generalize well as they tend to have high variance. Ensemble methods, collections of regression trees, solve that problem by spreading the model over numerous trees. Ensemble methods such as Random Forest, Gradient Boosted Trees and Bayesian Additive Regression Trees all have different ways of constructing a prediction model from data. Using these models on large datasets is computationally demanding. The aim of this work is to explore a parallel implementation of Bayesian Additive Regression Trees (BART) using the Apache Spark framework. Spark is ideal in this case as it is great for iterative and data-intensive jobs. We show that our parallel implementation is about 35 times faster for a dataset of pig genomes. Most of the speed improvement is due to serial code modification that minimizes scanning of the data. The gain from parallelization is a speedup of 2.2x, obtained by using four cores on a quad-core system. Measurements on a computer cluster consisting of four computers resulted in a maximum speedup of 2.1x for eight cores. We should emphasize that these gains are heavily dependent on the size of the dataset.
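The "additive trees" idea underlying ensembles like BART can be sketched with a deliberately tiny example: fit a regression stump, subtract a shrunken copy of its prediction from the residuals, and repeat, so the final model is a sum of many weak trees. This is a generic boosted-stump illustration of the additive principle only; BART itself builds the sum via Bayesian MCMC sampling, which is not shown here, and the data are invented.

```python
def fit_stump(x, y):
    """Best single-split regression stump: returns a predictor
    v -> left_mean if v <= threshold else right_mean."""
    best = None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((yi - (lm if xi <= t else rm)) ** 2
                  for xi, yi in zip(x, y))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda v, t=t, lm=lm, rm=rm: lm if v <= t else rm

def additive_trees(x, y, n_trees=20, lr=0.5):
    """Fit stumps sequentially to residuals; the model is their
    shrunken sum (the additive structure BART also uses)."""
    trees, resid = [], list(y)
    for _ in range(n_trees):
        stump = fit_stump(x, resid)
        trees.append(stump)
        resid = [r - lr * stump(xi) for r, xi in zip(resid, x)]
    return lambda v: sum(lr * tree(v) for tree in trees)

x = [0, 1, 2, 3, 4, 5]
y = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]
model = additive_trees(x, y)
print(round(model(4), 2))  # close to the right-side mean of y
```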
27

Modeling of drug effect in general closed-loop anesthesia

Cox, Sander January 2017 (has links)
In medicine, anesthesia is achieved by administering two interacting drugs. Nowadays, the Depth of Anesthesia can be expressed by the Bispectral Index Scale, which is measured by an EEG. In order to make automatic closed-loop anesthesia possible, with the benefits of 1) relieving the anesthesiologist from the hard task of administering optimal drug doses, 2) achieving more consistent drug effects by means of individualization, and 3) reducing side effects because of the reduced overall drug administration, estimating accurate models of the effect of drug doses on the Depth of Anesthesia is essential. The model used was a minimally parametrized pharmacokinetic-pharmacodynamic (PK-PD) Wiener model. The parameters of the model were estimated using an Extended Kalman Filter, whose parameters were tuned manually. The model and filter were tested on new data from both the University of Porto and the University of Brescia. The unit of the reference data set from Porto was unknown, so in order to use the scale-dependent model, an educated guess was made to convert the other data sets to a reasonable scale. Furthermore, the data from Brescia was incomplete, which could only partly be remedied. Similar tracking performance was obtained on the new data sets compared to the reference data; however, either relatively constant estimates, or different parameter estimates for similar conditions, were typically obtained. This calls into question the validity of the model used and whether the parameters found can be trusted. Therefore, replication of the procedure on other complete data, and comparison with other models applied to the studied data, is a subject for future research.
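A Wiener model is linear dynamics followed by a static output nonlinearity, and an Extended Kalman Filter handles the nonlinearity by linearizing it around the current estimate. The scalar sketch below illustrates that combination with a BIS-like sigmoid output; all constants (dynamics, Hill parameters, noise variances) are invented for illustration and are not the identified clinical values or the thesis model.

```python
# Minimal scalar EKF for a Wiener-type model: linear effect-site
# dynamics followed by a static sigmoid (BIS-like) output map.
A, B = 0.9, 0.1          # illustrative linear dynamics x+ = A*x + B*u
C50, GAMMA = 2.0, 2.0    # illustrative sigmoid (Hill) parameters
Q, R = 1e-3, 1.0         # process / measurement noise variances

def h(x):                 # output: 100 (awake) decreasing toward 0
    return 100.0 / (1.0 + (max(x, 0.0) / C50) ** GAMMA)

def dh(x):                # derivative of h, used for linearization
    x = max(x, 1e-9)
    s = (x / C50) ** GAMMA
    return -100.0 * GAMMA * s / (x * (1.0 + s) ** 2)

def ekf_step(x, P, u, y):
    x_pred = A * x + B * u            # predict state
    P_pred = A * P * A + Q            # predict covariance
    H = dh(x_pred)                    # linearize output map
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (y - h(x_pred))
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Track a constant infusion u = 2; the true effect-site level
# settles at B*u / (1 - A) = 2.0, and the estimate should follow.
x_true, x_est, P = 0.0, 0.5, 1.0
for _ in range(200):
    x_true = A * x_true + B * 2.0
    x_est, P = ekf_step(x_est, P, 2.0, h(x_true))
print(round(x_est, 2))
```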
28

Analysis of solutions for energy self-sufficiency of a single-family house using renewable energy

Palacio Sánchez, Pablo January 2017 (has links)
ERASMUS
29

Modeling and electrical characterization of Cu(In,Ga)Se2 and Cu2ZnSnS4 solar cells

Frisk, Christopher January 2017 (has links)
In this thesis, modeling and electrical characterization have been performed on Cu(In,Ga)Se2 (CIGS) and Cu2ZnSnS4 (CZTS) thin film solar cells, with the aim to investigate potential improvements to the power conversion efficiency of each technology. The modeling was primarily done in SCAPS, and current-voltage (J-V), quantum efficiency (QE) and capacitance-voltage (C-V) measurements were the primary characterization methods. In CIGS, models of a 19.2 % efficient reference device were created by fitting simulations of J-V and QE to corresponding experimental data. Within the models, single and double GGI = Ga/(Ga+In) gradients through the absorber layer were optimized, yielding up to 2 % absolute increase in efficiency compared to the reference models. For CIGS solar cells of this performance level, the electron diffusion length (Ln) is comparable to the absorber thickness. Thus, increasing GGI towards the back contact acts as passivation and constitutes the largest part of the efficiency increase. For further efficiency increase, the main bottlenecks to address are optical losses and electron lifetime in the CIGS. In a CZTS model of a 6.7 % efficient reference device, bandgap (Eg) fluctuations and interface recombination were shown to be the main limits to open-circuit voltage (Voc), while Shockley-Read-Hall (SRH) recombination limits Ln and is thus the main limit to short-circuit current and fill factor. Combined, Eg fluctuations and interface recombination cause about 10 % absolute loss in efficiency, and SRH recombination about 9 % loss, compared to an ideal system. Part of the Voc-deficit originates from a cliff-type conduction band offset (CBO) between CZTS and the standard CdS buffer layer, and the energy of the dominant recombination path (EA) is around 1 eV, well below Eg for CZTS. However, it was shown that the CBO could be adjusted and improved with Zn1-xSnxOy buffer layers. The best results gave EA = 1.36 eV, close to Eg = 1.3-1.35 eV for CZTS as given by photoluminescence, and the Voc-deficit decreased by almost 100 mV. Experimentally, by varying the absorber layer thickness in CZTS devices, the efficiency was seen to saturate already at <1 μm thickness, due to the short Ln, expected to be 250-500 nm, and the narrow depletion width, commonly of the order of 100 nm in in-house CZTS. The doping concentration (NA) determines the depletion width and is critical to device performance in general. To better estimate NA with C-V, ZnS and CZTS sandwich structures were created, and in conjunction with simulations it was seen that the capacitance extracted from CZTS is heavily frequency dependent. Moreover, it was shown that C-V characterization of full solar cells may greatly underestimate NA, meaning that the simple sandwich structure might be preferable in this type of analysis. Finally, a model of Cu2ZnSn(S,Se)4 was created to study the effect of S/(S+Se) gradients, in a manner similar to the GGI gradients in CIGS. With lower Eg and higher mobility for pure selenides compared to pure sulfides, it was seen that increasing S/(S+Se) towards the back contact improves efficiency by about 1 % absolute, compared to the best ungraded model, where S/(S+Se) = 0.25. Minimizing Eg fluctuations in CZTS in conjunction with suitable buffer layers, and improving Ln in all sulfo-selenides, are needed to bring these technologies into the commercial realm.
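The Voc-deficit discussed above is the gap between the bandgap voltage Eg/q and the achieved open-circuit voltage; in the ideal single-diode picture, Voc is set by the ratio of photocurrent to the recombination-dependent saturation current. The sketch below only illustrates that textbook relation; the parameter values are made-up examples, not the devices characterized in the thesis.

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
q = 1.602176634e-19      # elementary charge, C
T = 300.0                # temperature, K
n = 1.5                  # diode ideality factor (illustrative)
J_sc = 20e-3             # short-circuit current density, A/cm^2 (illustrative)
J_0 = 1e-9               # saturation current density, A/cm^2 (illustrative;
                         # grows with recombination, lowering Voc)
E_g = 1.3                # bandgap, eV (CZTS-like)

# Ideal single-diode open-circuit voltage and the resulting Voc-deficit.
V_oc = n * k_B * T / q * math.log(J_sc / J_0 + 1.0)
deficit = E_g - V_oc     # in volts, since E_g is given in eV
print(f"Voc = {V_oc:.3f} V, Voc-deficit = {deficit:.3f} V")
```

Lowering J_0 (less recombination) or removing a cliff-type band offset both act to shrink this deficit, which is the direction of improvement reported above.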
30

Chatbots As A Mean To Motivate Behavior Change : How To Inspire Pro-Environmental Attitude with Chatbot Interfaces

Åberg, Jakob January 2017 (has links)
With expanding access to decision-supporting technologies and a growing demand for lowered carbon dioxide emissions, sustainable development with the help of modern interfaces has become a subject for discussion. There are different opinions on how to motivate users to live more pro-environmentally and to lower their carbon dioxide emissions with modern technology. This paper analyses the use of chatbots as a means to motivate people to live more sustainable lives. To evaluate the field, a literature study was conducted covering eco-feedback technology, recommender systems, conversational user interfaces, and motivation for pro-environmental behavior. The effects of motivational factors from behavioral psychology on people's food consumption habits were tested. The findings of this paper were based on three chatbot prototypes: one built on the motivational factor of information, a second implemented around the motivational factor of goal-setting, and a third following the motivational factor of comparison. Twenty-seven persons participated in the study: seven people at the early stages of the project, and twenty people who used the chatbots. The user experience of the chatbots was evaluated, resulting in guidelines on how to design chatbot interfaces for behavior change. The results from the user interviews indicate that chatbots can affect and motivate people to consume food in a more sustainable way.
