121

Robust design : Accounting for uncertainties in engineering

Lönn, David January 2008 (has links)
This thesis concerns optimization of structures considering various uncertainties. The overall objective is to find methods to create solutions that are optimal both in the sense of handling the typical load case and in minimising the variability of the response, i.e. robust optimal designs. Traditionally optimized structures may show a tendency to be sensitive to small perturbations in the design or loading conditions, which of course are inevitable. To create robust designs, it is necessary to account for all conceivable variations (or at least the influential ones) in the design process. The thesis is divided into two parts. The first part serves as a theoretical background to the second part, the two appended articles. This first part covers the concept of robust design, basic statistics, optimization theory and metamodelling. The first appended paper is an application of existing methods to a large industrial example problem. A sensitivity analysis is performed on a Scania truck cab subjected to impact loading in order to identify the variables with the greatest influence on the crash responses. The second paper presents a new method that may be used in robust optimizations, that is, optimizations that account for variations and uncertainties. The method is demonstrated on both an analytical example and a Finite Element example of an aluminium extrusion subjected to axial crushing. / ROBDES
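As a hedged companion to the abstract above (not the thesis's metamodel-based procedure), the sketch below contrasts a deterministic optimum with a robust one that penalises both the mean and the spread of a response under perturbed design variables; the response function, perturbation size and weighting are invented for illustration.

```python
# Minimal robust-optimization sketch: trade off the mean and the spread of a
# response under perturbations of the design variables. The response function,
# noise level and weighting are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
noise = 0.2 * rng.standard_normal((200, 2))     # fixed perturbation draws

def response(x):
    # Stand-in structural response (e.g. a surrogate for intrusion or peak force).
    return (x[0] - 2.0) ** 2 + 0.5 * np.sin(5.0 * x[0]) + (x[1] - 1.0) ** 2

def robust_objective(x, weight=2.0):
    # Penalise both the average response and its spread over the perturbations.
    values = np.array([response(x + d) for d in noise])
    return values.mean() + weight * values.std()

deterministic = minimize(response, x0=[0.0, 0.0], method="Nelder-Mead")
robust = minimize(robust_objective, x0=[0.0, 0.0], method="Nelder-Mead")
print("deterministic optimum:", deterministic.x)
print("robust optimum:       ", robust.x)
```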
122

Computational Journalism: from Answering Questions to Questioning Answers and Raising Good Questions

Wu, You January 2015 (has links)
Our media is saturated with claims of "facts" made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims, e.g., is a claim "cherry-picking"? This paper proposes a Query Response Surface (QRS) based framework that models claims based on structured data as parameterized queries. A key insight is that we can learn a lot about a claim by perturbing its parameters and seeing how its conclusion changes. This framework lets us formulate and tackle practical fact-checking tasks, such as reverse-engineering vague claims and countering questionable claims, as computational problems. Within the QRS-based framework, we take one step further and propose a problem, along with efficient algorithms, for finding high-quality claims of a given form from data, i.e. raising good questions, in the first place. This is achieved by using a limited number of high-valued claims to represent high-valued regions of the QRS. Besides the general-purpose high-quality claim finding problem, lead-finding can be tailored towards specific claim quality measures, also defined within the QRS framework. An example of uniqueness-based lead-finding is presented for "one-of-the-few" claims, yielding interpretable high-quality claims and an adjustable mechanism for ranking objects, e.g. NBA players, based on what claims can be made for them. Finally, we study the use of visualization as a powerful way of conveying the results of a large number of claims. An efficient two-stage sampling algorithm is proposed for generating the input of a 2D scatter plot with heatmap, evaluating only a limited amount of data while preserving the two essential visual features, namely outliers and clusters. For all these problems, we present real-world examples and experiments that demonstrate the power of our model, the efficiency of our algorithms, and the usefulness of their results. / Dissertation
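The following sketch illustrates, on invented data, the core QRS idea of perturbing a claim's parameters and watching how its conclusion changes; the win-rate claim, window size and cherry-picking score are hypothetical and far simpler than the dissertation's framework.

```python
# Sketch of the "perturb the parameters and watch the conclusion" idea behind
# the Query Response Surface view of claims. Data and thresholds are invented.
import numpy as np

rng = np.random.default_rng(1)
results = rng.random(82) < 0.45                 # synthetic win/loss history, newest last

def claim_value(window):
    """Parameterized query: win fraction over the last `window` games."""
    return results[-window:].mean()

def cherry_pick_gap(window, half_width=5):
    """How much better the chosen window looks than its neighbouring windows."""
    neighbours = [claim_value(w)
                  for w in range(max(1, window - half_width), window + half_width + 1)]
    return claim_value(window) - float(np.mean(neighbours))

stated = 10                                     # the window the claim chose to report
print(f"claimed win rate over last {stated} games: {claim_value(stated):.2f}")
print(f"gap to nearby windows (cherry-picking signal): {cherry_pick_gap(stated):+.2f}")
```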
123

The determinants of voter turnout in OECD : An aggregated cross-national study using panel data

Olsén Ingefeldt, Niclas January 2016 (has links)
This paper examines, in a descriptive manner, how two groups of variables, institutional and socio-economic, correlate with voter turnout, and whether their magnitudes have changed over time in OECD countries. Previous research is often based on data from the 1970s and 1980s. Since then, voter turnout in democratic countries has decreased, and more citizens do not use their fundamental democratic right to be involved in the process of choosing their representatives. To test the paper's hypotheses, i.e. to analyse which factors correlate with voter turnout, panel data from 1980 to 2012 are used and estimated with an OLS approach. The outcome of the empirical estimations indicates that 13 out of 19 variables have a significant relationship with turnout. Most of the variables' magnitudes are somewhat lower than in the previous literature. The time sensitivity analysis indicates that voters are less influenced by the significant variables related to the cost of voting. It seems that voters in the 21st century respond to voting costs in a different manner than previously.
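A minimal sketch of the kind of pooled panel OLS the paper describes, run on synthetic data; the covariate names, effect sizes and clustering choice are placeholders, not the paper's specification.

```python
# Pooled panel OLS sketch on synthetic country-year data. Variable names
# (compulsory voting, proportional system, unemployment) and effects are
# invented placeholders, not the paper's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for i in range(20):                                   # 20 synthetic countries
    for year in range(1980, 2013):
        rows.append({"country": f"c{i}", "year": year,
                     "compulsory": int(i < 3),        # institutional variables
                     "proportional": i % 2,
                     "unemployment": rng.normal(7, 2)})
df = pd.DataFrame(rows)
df["turnout"] = (70 + 8 * df["compulsory"] + 3 * df["proportional"]
                 - 0.5 * df["unemployment"] + rng.normal(0, 5, len(df)))

# Pooled OLS with year dummies; standard errors clustered by country.
fit = smf.ols("turnout ~ compulsory + proportional + unemployment + C(year)",
              data=df).fit(cov_type="cluster",
                           cov_kwds={"groups": df["country"].astype("category").cat.codes})
print(fit.params[["compulsory", "proportional", "unemployment"]])
```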
124

Analys av solelinstallationer på olika fastighetstyper : En studie om möjligheter och hinder / Analysis of solar PV installations on different property types : A study of possibilities and obstacles

Nilsson, Sanna January 2016 (has links)
Generation of electricity from solar irradiation today accounts for a very small share of Sweden's total electricity generation. Reaching a higher level does not require enormous solar parks across Sweden; on the contrary, it could be achieved by using already existing rooftops. That fact is one of several that point to the need for wider dissemination of knowledge about today's energy system and photovoltaic technology. There is also a need for more knowledge about characteristic Swedish building and roof types, and about what can be done with different kinds of photovoltaic technology in order to convert the roofs of these building types into small energy sources. Solar power alone cannot outcompete fossil fuels, but it has good potential to reach a much higher share than is installed today. Wallenstam AB is an energy-conscious real estate company active in, among other places, Gothenburg. Its interest in renewable energy and in replacing fossil fuels with more environmentally friendly technology gave rise to a cooperation between the author and Wallenstam AB. The first part of the thesis aims to provide broader and deeper knowledge of photovoltaics and photovoltaic installations. The aim of the second part is to investigate the possibilities and obstacles of photovoltaic installations on a few typical Swedish property types. A design study is carried out on three property types in the Gothenburg area: an industrial/office building, a modern apartment block, and a property in central Gothenburg with both housing and commercial premises. The goal is to find, given each property's conditions, the most suitable photovoltaic panels for each property with respect to economic advantage, aesthetics, and what is technically feasible given, for example, previously installed energy systems. The report and its results are based on literature studies, site visits, measurements on drawings and satellite maps, hand calculations, and modelling in PVsyst. Ongoing discussions with people in the solar energy industry were also held. To calculate the levelized cost of electricity (LCOE) of each installation and to perform sensitivity analyses, a web-based calculation application belonging to the report El från nya och framtida anläggningar 2014 was used. The main result is that only two of the technically feasible installations have an electricity production cost close to the comparison electricity price of 0.75 SEK/kWh: the polycrystalline installations on Kvillebäcken 3:1 and Mölnlycke Fabriker 1:1. The calculations show that these two installations become genuinely competitive if the discount rate is lowered from 4 % to 2 %, or if the investment costs are reduced by 15 %. On the property Inom Vallgraven 26:8, all electricity production costs are above one Swedish krona per kilowatt hour, but installing photovoltaics on such a central building could have a marketing value and would also demonstrate Wallenstam's commitment to renewable energy, new technology and a sustainable energy system.
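A simplified levelized-cost calculation in the spirit of the study, including the two sensitivity cases mentioned above (a lower discount rate and a 15 % lower investment cost); every number is an illustrative assumption, not a value from the report or the web-based calculator it used.

```python
# Simplified LCOE sketch for a rooftop PV system. All inputs (investment,
# yield, O&M, lifetime, degradation) are illustrative assumptions.

def lcoe(investment_sek, annual_kwh, om_sek_per_year, rate, lifetime=30,
         degradation=0.005):
    """Levelized cost of energy: discounted costs / discounted production."""
    costs = investment_sek + sum(om_sek_per_year / (1 + rate) ** t
                                 for t in range(1, lifetime + 1))
    energy = sum(annual_kwh * (1 - degradation) ** t / (1 + rate) ** t
                 for t in range(1, lifetime + 1))
    return costs / energy

base = dict(investment_sek=600_000, annual_kwh=42_000, om_sek_per_year=3_000)
cheaper = dict(base, investment_sek=base["investment_sek"] * 0.85)

print(f"base case (4 % discount rate): {lcoe(**base, rate=0.04):.2f} SEK/kWh")
print(f"discount rate cut to 2 %:      {lcoe(**base, rate=0.02):.2f} SEK/kWh")
print(f"investment reduced by 15 %:    {lcoe(**cheaper, rate=0.04):.2f} SEK/kWh")
```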
125

Statistical Approaches for Handling Missing Data in Cluster Randomized Trials

Fiero, Mallorie H. January 2016 (has links)
In cluster randomized trials (CRTs), groups of participants are randomized as opposed to individual participants. This design is often chosen to minimize treatment arm contamination or to enhance compliance among participants. In CRTs, we cannot assume independence among individuals within the same cluster because of their similarity, which leads to decreased statistical power compared to individually randomized trials. The intracluster correlation coefficient (ICC) is crucial in the design and analysis of CRTs, and measures the proportion of total variance due to clustering. Missing data are a common problem in CRTs and should be accommodated with appropriate statistical techniques because they can compromise the advantages created by randomization and are a potential source of bias. In three papers, I investigate statistical approaches for handling missing data in CRTs. In the first paper, I carry out a systematic review evaluating current practice in handling missing data in CRTs. The results show high rates of missing data in the majority of CRTs, yet handling of missing data remains suboptimal. Fourteen (16%) of the 86 reviewed trials reported carrying out a sensitivity analysis for missing data. Despite suggestions to weaken the missing data assumption from the primary analysis, only five of the trials weakened the assumption. None of the trials reported using missing not at random (MNAR) models. Because of the low proportion of CRTs reporting an appropriate sensitivity analysis for missing data, the second paper aims to facilitate performing a sensitivity analysis for missing data in CRTs by extending the pattern mixture approach for missing clustered data under the MNAR assumption. I implement multilevel multiple imputation (MI) in order to account for the hierarchical structure found in CRTs, and multiply imputed values by a sensitivity parameter, k, to examine parameters of interest under different missing data assumptions. The simulation results show that estimates of parameters of interest in CRTs can vary widely under different missing data assumptions. A high proportion of missing data can occur in CRTs because missing data can be found at the individual level as well as the cluster level. In the third paper, I use a simulation study to compare missing data strategies for handling missing cluster level covariates, including the linear mixed effects model, single imputation, single level MI ignoring clustering, MI incorporating clusters as fixed effects, and MI at the cluster level using aggregated data. The results show that when the ICC is small (ICC ≤ 0.1) and the proportion of missing data is low (≤ 25%), the mixed model generates unbiased estimates of regression coefficients and ICC. When the ICC is higher (ICC > 0.1), MI at the cluster level using aggregated data performs well for missing cluster level covariates, though caution should be taken if the percentage of missing data is high.
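The sketch below illustrates the pattern-mixture sensitivity idea on an invented cluster-randomized trial: impute missing outcomes, scale the imputed values by a sensitivity parameter k, and watch the treatment estimate move. Simple cluster-mean imputation stands in for the multilevel multiple imputation used in the dissertation.

```python
# Pattern-mixture sensitivity sketch for clustered data. The simulated trial,
# missingness mechanism and imputation scheme are invented stand-ins.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
rows = []
for c in range(30):                                    # 30 clusters, 20 per cluster
    treat = c % 2
    cluster_effect = rng.normal(0, 1.0)
    for _ in range(20):
        y = 10 + 2.0 * treat + cluster_effect + rng.normal(0, 2.0)
        missing = rng.random() < (0.25 + 0.10 * treat)  # more dropout if treated
        rows.append({"cluster": c, "treat": treat, "y": np.nan if missing else y})
df = pd.DataFrame(rows)

def treatment_effect(data, k):
    filled = data.copy()
    cluster_means = filled.groupby("cluster")["y"].transform("mean")
    filled["y"] = filled["y"].fillna(k * cluster_means)  # MNAR shift via k
    means = filled.groupby("treat")["y"].mean()
    return means[1] - means[0]

for k in (0.8, 0.9, 1.0, 1.1):
    print(f"k = {k:.1f}  ->  estimated treatment effect {treatment_effect(df, k):.2f}")
```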
126

A generic predictive information system for resource planning and optimisation

Tavakoli, Siamak January 2010 (has links)
The purpose of this research work is to demonstrate the feasibility of creating a quick-response decision platform for middle management in industry. It utilises the strengths of current Supervisory Control and Data Acquisition (SCADA) systems and Discrete Event Simulation and Modelling (DESM), and, more importantly, creates a leap forward in their theory and practice. The proposed research platform uses real-time data and creates an automatic platform for real-time and predictive system analysis, giving current and ahead-of-time information on the performance of the system in an efficient manner. Data acquisition, as the back-end connection between the data integration system and the shop floor, faces both hardware and software challenges in coping with large-scale real-time data collection. The limited scope of SCADA systems does not make them suitable candidates for this, and the cost, complexity, and narrow efficiency orientation of proprietary solutions leave room for improvement. A Flexible Data Input Layer Architecture (FDILA) is proposed to provide a generic data integration platform so that a multitude of data sources can be connected to the data processing unit. The efficiency of the proposed integration architecture lies in decentralising and distributing services between different layers. A novel sensitivity analysis (SA) method called EvenTracker is proposed as an effective tool to measure the importance and priority of inputs to the system. The EvenTracker method is introduced to deal with complex systems in real time. The approach takes advantage of an event-based definition of the data involved in the process flow. The underpinning logic of the EvenTracker SA method is capturing the cause-effect relationships between triggers (input variables) and events (output variables) over a period of time specified by an expert. The approach does not require estimating data distributions of any kind, nor does the performance model require execution beyond real time. The proposed EvenTracker sensitivity analysis method has the lowest computational complexity compared with other popular sensitivity analysis methods. For proof of concept, a three-tier data integration system was designed and developed using National Instruments' LabVIEW programming language, Rockwell Automation's Arena simulation and modelling software, and OPC data communication software. A laboratory-based conveyor system with 29 sensors was installed to simulate a typical shop floor production line. In addition, the EvenTracker SA method was applied to data extracted from 28 sensors of one manufacturing line in a real factory. The experiment showed that 14% of the input variables were unimportant for evaluating the model outputs, and the method achieved a time efficiency gain of 52% in the analysis of the filtered system once the unimportant input variables were no longer sampled. Compared with the entropy-based SA technique, the only other method that can be used for real-time purposes, the EvenTracker SA method is quicker, more accurate and less computationally burdensome. Additionally, a theoretical estimation of the computational complexity of SA methods, based on both structural complexity and energy-time analysis, came out in favour of the proposed EvenTracker SA method. Both the laboratory and the factory-based experiments demonstrated the flexibility and efficiency of the proposed solution.
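As a loose illustration only (not the EvenTracker algorithm itself), the sketch below ranks synthetic input streams by how often their trigger events are followed by an output event within a short time window, which is the cause-effect intuition described above.

```python
# Toy event-based sensitivity sketch: rank inputs by how often a trigger is
# followed by the output event within a lag window. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(4)
horizon, window = 5_000, 3                    # time steps observed, lag window

# Synthetic trigger streams: input 0 actually drives the output event,
# input 1 does so only sometimes, input 2 is unrelated noise.
triggers = rng.random((3, horizon)) < 0.05
output = np.zeros(horizon, dtype=bool)
for t in range(horizon - window):
    if triggers[0, t] or (triggers[1, t] and rng.random() < 0.3):
        output[t + rng.integers(1, window + 1)] = True

def trigger_event_rate(trigger, output, window):
    # Fraction of trigger firings followed by the output event within `window` steps.
    starts = np.flatnonzero(trigger[:-window - 1])
    hits = sum(output[t + 1:t + window + 1].any() for t in starts)
    return hits / max(1, len(starts))

for i in range(triggers.shape[0]):
    rate = trigger_event_rate(triggers[i], output, window)
    print(f"input {i}: trigger -> event rate {rate:.2f}")
```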
127

Känslighets- och osäkerhetsanalys av parametrar och indata i dagvatten- och recipientmodellen StormTac / Sensitivity and uncertainty analysis of parameters and input data in the stormwater and recipient model StormTac

Stenvall, Brita January 2004 (has links)
Three methods of sensitivity and uncertainty analysis have been applied to the operative stormwater and recipient model StormTac. The study area is the watershed of Lake Flaten in the municipality of Salem, and StormTac's submodels for stormwater, pollutant transport and the recipient are considered. In the sensitivity assessment, the model parameters and inputs were varied one at a time by a constant percentage according to the "one at a time" (OAAT) method, and the response of the outputs was calculated. The stormwater and base flows were most sensitive to perturbations in the precipitation, while the nitrogen, phosphorus and copper loads on the recipient were most sensitive to the respective pollutant's stormwater concentration from built-up areas. Uncertainty analysis using Monte Carlo simulation was performed in two ways. (1) All model parameters and inputs were included with defined uncertainties and the resulting uncertainty in the target variable was quantified; thereafter, to estimate each parameter's and input's contribution to the cumulative uncertainty in the target variable, each uncertainty was omitted one at a time and the difference between the 90th and 10th percentiles of the target variable was studied. The most important uncertainties for the stormwater flow were the runoff coefficient for forest land and the precipitation (omitting them reduced the difference between the 90th and 10th percentiles of the stormwater flow by 44 % and 33 %, respectively). (2) To identify optimal parameter intervals, the probability of an acceptable value of the target variable was plotted against each parameter's value range; for some of the parameters in StormTac, the results suggest that the ranges should be changed. Uniform probability distributions, bounded by StormTac's minimum and maximum parameter values and by ± 50 % of the original values for the inputs, were used in both uncertainty analyses.
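The sketch below reproduces the two kinds of analysis on a drastically simplified runoff model; the parameter values, uncertainty bands and the model itself are invented stand-ins for StormTac.

```python
# OAAT perturbation and Monte Carlo percentile analysis on a toy runoff model
# Q = precipitation * sum(runoff_coefficient_i * area_i) / total_area.
# All values and ranges are invented, not StormTac's.
import numpy as np

rng = np.random.default_rng(5)
base = {"precip_mm": 636.0, "coef_forest": 0.1, "area_forest_ha": 120.0,
        "coef_urban": 0.5, "area_urban_ha": 45.0}

def runoff(p):
    # Area-weighted annual runoff depth (mm); stands in for the stormwater flow.
    weighted = (p["coef_forest"] * p["area_forest_ha"]
                + p["coef_urban"] * p["area_urban_ha"])
    return p["precip_mm"] * weighted / (p["area_forest_ha"] + p["area_urban_ha"])

# 1) OAAT sensitivity: perturb each input by +10 % and record the response.
for name in base:
    bumped = dict(base, **{name: base[name] * 1.10})
    change = 100 * (runoff(bumped) - runoff(base)) / runoff(base)
    print(f"OAAT +10 % in {name:15s} -> {change:+5.1f} % change in runoff")

# 2) Monte Carlo: draw all inputs from uniform +-50 % bands, then repeat with one
#    input held fixed to see how much of the 90th-10th percentile spread it owns.
def spread(fixed=None, n=5_000):
    draws = []
    for _ in range(n):
        sample = {k: (v if k == fixed else v * rng.uniform(0.5, 1.5))
                  for k, v in base.items()}
        draws.append(runoff(sample))
    q10, q90 = np.percentile(draws, [10, 90])
    return q90 - q10

full = spread()
for name in base:
    cut = 100 * (full - spread(fixed=name)) / full
    print(f"omitting uncertainty in {name:15s} cuts the 90-10 spread by {cut:.0f} %")
```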
128

Optimal shape design based on body-fitted grid generation.

Mohebbi, Farzad January 2014 (has links)
Shape optimization is an important step in many design processes. With the growing use of Computer Aided Engineering in the design chain, it has become very important to develop robust and efficient shape optimization algorithms. The field of Computer Aided Optimal Shape Design has grown substantially over the recent past. In the early days of its development, a method based on small shape perturbations to probe the parameter space and identify an optimal shape was routinely used. This method is nothing but an educated trial-and-error method. A key development in the pursuit of good shape optimization algorithms has been the advent of the adjoint method to compute the shape sensitivities more formally and efficiently. While undoubtedly very attractive, this method relies on very sophisticated and advanced mathematical tools which are an impediment to its wider use in the engineering community. In that spirit, it is the purpose of this thesis to propose a new shape optimization algorithm based on more intuitive engineering principles and numerical procedures. The new shape optimization procedure proposed in this thesis is based on the generation of a body-fitted mesh. This process maps the physical domain into a regular computational domain. Based on simple arguments relating to the use of the chain rule in the mapped domain, it is shown that an explicit expression for the shape sensitivity can be derived. This enables the computation of the shape sensitivity in one single solve, a performance analogous to the adjoint method, the current state of the art. The discretization is based on the Finite Difference method, a method chosen for its simplicity and ease of implementation. The algorithm is applied to the Laplace equation in the context of heat transfer problems and potential flows. The applicability of the proposed algorithm is demonstrated on a number of benchmark problems which clearly confirm the validity of the sensitivity analysis, the most important aspect of any shape optimization problem. The thesis also explores the relative merits of different minimization algorithms and proposes a technique to "fix" meshes when inverted elements arise as part of the optimization process. While the problems treated are still elementary compared to complex multiphysics engineering problems, the new methodology presented in this thesis could in principle apply to arbitrary Partial Differential Equations.
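For orientation only, the sketch below solves the Laplace equation on a plate whose height is a single shape parameter and estimates the shape sensitivity of a heat-flux objective by central finite differences; this is merely the brute-force baseline such sensitivities are checked against, not the thesis's one-solve analytical derivation on a body-fitted grid.

```python
# Finite-difference Laplace solve plus brute-force shape sensitivity.
# Geometry, boundary conditions and objective are invented for illustration.
import numpy as np

def solve_laplace(h, n=41, iterations=4000):
    """Jacobi iteration for Laplace's equation on an n x n grid over [0,1] x [0,h]."""
    dx, dy = 1.0 / (n - 1), h / (n - 1)
    T = np.zeros((n, n))                                  # rows = y, columns = x
    T[0, :] = np.sin(np.pi * np.linspace(0.0, 1.0, n))    # heated bottom edge
    for _ in range(iterations):
        T[1:-1, 1:-1] = ((T[2:, 1:-1] + T[:-2, 1:-1]) * dx**2
                         + (T[1:-1, 2:] + T[1:-1, :-2]) * dy**2) / (2 * (dx**2 + dy**2))
    return T, dy

def objective(h):
    """Mean heat flux leaving through the cold top edge."""
    T, dy = solve_laplace(h)
    return float(np.mean((T[-2, :] - T[-1, :]) / dy))

h0, dh = 0.8, 1e-3                                        # shape parameter and step
sensitivity = (objective(h0 + dh) - objective(h0 - dh)) / (2 * dh)
print(f"J(h0)  = {objective(h0):.4f}")
print(f"dJ/dh  = {sensitivity:.4f}   (central finite difference)")
```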
129

Sensitivity Analysis and Distortion Decomposition of Mildly Nonlinear Circuits

Zhu, Guoji January 2007 (has links)
Volterra Series (VS) is often used in the analysis of mildly nonlinear circuits. In this approach, nonlinear circuit analysis is converted into the analysis of a series of linear circuits. The main benefit of this approach is that linear circuit analysis is well established and direct frequency domain analysis of a nonlinear circuit becomes possible. Sensitivity analysis is useful for comparing the quality of two designs and for evaluating gradient, Jacobian or Hessian matrices in analog Computer Aided Design. This thesis presents, for the first time, the sensitivity analysis of mildly nonlinear circuits in the frequency domain as an extension of the VS approach. To overcome the efficiency limitation due to multiple mixing effects, the Nonlinear Transfer Matrix (NTM) is introduced. It is the first explicit analytical representation of the complicated multiple mixing effects. The application of NTM in sensitivity analysis is capable of a two-orders-of-magnitude speedup. Per-element distortion decomposition determines the contribution of an individual nonlinearity towards the total distortion. It is useful in design optimization, symbolic simplification and nonlinear model reduction. In this thesis, a numerical distortion decomposition technique is introduced which combines the insight of traditional symbolic analysis with the numerical advantages of SPICE-like simulators. The use of NTM leads to an efficient implementation. The proposed method greatly extends the size of the circuit and the complexity of the transistor model over what previous approaches could handle. For example, an industry-standard compact model such as BSIM3V3 [35] was used for the first time in distortion analysis. The decomposition can be achieved at device, transistor and block level, all with device-level accuracy. The theories have been implemented in a computer program and validated on examples. The proposed methods will leverage the performance of present VS-based distortion analysis to the next level.
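As a much-simplified stand-in for the Volterra-series machinery (no memory, no NTM), the sketch below drives a cubic nonlinearity with two tones, reads the intermodulation products off an FFT, and zeroes one coefficient at a time as a crude per-element distortion decomposition; all coefficients and frequencies are invented.

```python
# Two-tone distortion sketch for a memoryless mildly nonlinear amplifier
# y = a1*x + a2*x^2 + a3*x^3. Coefficients and tone frequencies are invented.
import numpy as np

fs, n = 100_000.0, 100_000               # sample rate and record length (1 s)
t = np.arange(n) / fs
f1, f2, amp = 1_000.0, 1_300.0, 0.1
x = amp * (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t))

def spectrum(a1, a2, a3):
    y = a1 * x + a2 * x**2 + a3 * x**3
    return np.abs(np.fft.rfft(y)) / (n / 2)

def tone(mag, freq):
    return mag[int(round(freq))]          # 1 Hz bin spacing with this record

full = spectrum(a1=10.0, a2=1.0, a3=-5.0)
no_cubic = spectrum(a1=10.0, a2=1.0, a3=0.0)   # cubic term switched off

print(f"fundamental at {f1:.0f} Hz:          {tone(full, f1):.4e}")
print(f"2nd-order IM at {f2 - f1:.0f} Hz:      {tone(full, f2 - f1):.4e}")
print(f"3rd-order IM at {2 * f1 - f2:.0f} Hz:     {tone(full, 2 * f1 - f2):.4e}")
print(f"same bin with a3 = 0:             {tone(no_cubic, 2 * f1 - f2):.4e}")
```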
130

Evaluation environnementale du véhicule électrique : méthodologies et application / Electric vehicle environmental assessment : methodologies and application

Picherit, Marie-Lou 27 September 2010 (has links)
Today, the electric vehicle is presented as a serious alternative to the internal combustion engine vehicle, aiming at limiting the consumption of fossil fuels as well as emissions of local pollutants and greenhouse gases. The assessment of the strengths and weaknesses of this technology from an environmental viewpoint is currently limited, in particular because of the scarce feedback from experience with this type of vehicle. The objective of this research is to propose an approach combining a detailed understanding of the studied vehicle (obtained notably through experimental tests and the use of consumption models) with the environmental assessment method Life Cycle Assessment (LCA), in order to identify the key parameters of the environmental balance and, through various sensitivity analyses, to provide a detailed analysis of it. To achieve this, experimental tests were carried out on an electric vehicle intended mainly for urban use and on its internal combustion engine equivalent. A model estimates vehicle consumption according to vehicle characteristics (battery chemistry and capacity, drivetrain efficiency) and conditions of use (traffic, use of auxiliaries). Assumptions and scenarios are also made about the lifetime of the batteries fitted to the vehicle. The resulting data sets are used in the LCA of an electric vehicle, and the results are interpreted and then compared with those of the equivalent internal combustion engine vehicle. Finally, sensitivity analyses and tests of various scenarios allow the identification of the key parameters of the environmental assessment.
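A toy life-cycle greenhouse-gas comparison in the spirit of the thesis; every figure (battery production emissions, grid carbon intensities, consumption, mileage) is a rough assumption for illustration, not a result of the study.

```python
# Toy life-cycle CO2 comparison of an electric and a combustion-engine vehicle.
# All numbers below are rough illustrative assumptions, not the thesis's data.

def ev_lifecycle_co2(km, grid_gco2_per_kwh, kwh_per_km=0.17,
                     battery_kwh=24, battery_gco2_per_kwh=100_000,
                     batteries_needed=1, glider_co2_kg=5_500):
    battery_kg = batteries_needed * battery_kwh * battery_gco2_per_kwh / 1_000
    use_kg = km * kwh_per_km * grid_gco2_per_kwh / 1_000
    return glider_co2_kg + battery_kg + use_kg

def ice_lifecycle_co2(km, l_per_100km=6.0, gco2_per_litre=2_800,
                      glider_co2_kg=5_000):
    return glider_co2_kg + km * l_per_100km / 100 * gco2_per_litre / 1_000

km = 150_000
for label, grid in [("low-carbon mix (~80 gCO2/kWh)", 80),
                    ("average mix (~400 gCO2/kWh)", 400),
                    ("coal-heavy mix (~900 gCO2/kWh)", 900)]:
    ev = ev_lifecycle_co2(km, grid)
    print(f"{label:30s} EV {ev / 1000:5.1f} t vs ICE "
          f"{ice_lifecycle_co2(km) / 1000:5.1f} t CO2-eq")
# A battery replacement adds roughly battery_kwh * 100 kg CO2 to the EV total,
# which is one of the lifetime assumptions the sensitivity analysis would vary.
```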
