  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Etude d'un réseau cryogénique multi-clients pour SPIRAL2 / Study of a multi-client cryogenic system for SPIRAL2

Vassal, Adrien 04 November 2019 (has links)
Les travaux de thèse décrits dans ce manuscrit s'intéressent à la modélisation et au contrôle de procédés cryogéniques associés à un LINAC supraconducteur. Les éléments constituants le système cryogénique du LINAC (cryomodules, vannes, tuyaux, ...) sont modélisés sous forme d'objets inter-connectables. Ces mêmes objets sont utilisés pour modéliser le système cryogénique de l'accélérateur SPIRAL2. La justesse de ces modèles est évaluée à travers des comparaisons entre simulations non-linéaires et mesures expérimentales. Une fois la capacité de prédiction des modèles validée, ces derniers sont utilisés pour synthétiser des lois de commande. Tout particulièrement, une commande linéaire quadratique ainsi qu'un filtre de Kalman étendu ont été réalisés pour les cryomodules de SPIRAL2. Enfin, la réalisation d'une commande hiérarchisée appliquée au cas d'un réfrigérateur cryogénique est étudiée. Une méthode est proposée pour le contrôle de plusieurs sous-systèmes inter-connectés et régulés par des contrôleurs PID, LQ et MPC. / The thesis work described in this manuscript deals with the modelling and control of LINAC cryogenic processes. The components of the LINAC cryogenic system (cryomodules, valves, pipes, ...) are modelled as interconnectable objects. Those same objects are used to model the SPIRAL2 cryogenic system. The model trueness is evaluated through comparisons between experimental and simulated data. Once the model prediction capability has been validated, the models are used to synthesize control laws. More specifically, a linear quadratic controller and an extended Kalman filter have been designed for the SPIRAL2 cryomodules. Finally, a hierarchical control scheme for a cryogenic refrigerator has been studied. A method is proposed for the control of multiple interconnected subsystems regulated through PID, LQ and MPC controllers.
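The linear quadratic control mentioned in this abstract can be sketched compactly. The two-state model below is a hypothetical stand-in for a cryomodule (the thesis models are nonlinear and far more detailed); all matrices and weights are invented for illustration:

```python
import numpy as np

# Hypothetical 2-state linear model (e.g. helium-bath level and
# vapour-pressure deviations of a cryomodule); placeholder values only.
A = np.array([[0.98, 0.05],
              [0.00, 0.95]])
B = np.array([[0.1],
              [0.2]])
Q = np.eye(2)          # state penalty
R = np.array([[0.5]])  # input penalty

def dlqr_gain(A, B, Q, R, iters=500):
    """Solve the discrete Riccati equation by fixed-point iteration
    and return the LQ state-feedback gain K (control law u = -K x)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = dlqr_gain(A, B, Q, R)

# Closed-loop simulation from an initial disturbance
x = np.array([[1.0], [0.5]])
for _ in range(100):
    x = (A - B @ K) @ x
print(float(np.linalg.norm(x)))  # state driven towards zero
```

In the thesis the feedback acts on states reconstructed by the extended Kalman filter rather than on direct measurements; the sketch skips that estimation step.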
32

Techno-economic analysis of integrating renewable electricity and electricity storage in Åland by 2030 : Overview of the current energy situation and definition of four possible environmentally friendly pathways

Nikzad, Dario January 2019 (has links)
The study focuses on the possible positive impacts of implementing innovative energy solutions in the Åland energy system by 2030. Four scenarios are formulated in order to determine feasible solutions in economic and technological terms. At present, most of the energy supply relies on power exchange with the mainland through subsea interconnections. The archipelago's main challenge is to reduce its heavy dependence on the main importer (Sweden) by increasing the use of local renewable energy sources. Wind power proves to be the most favourable form of variable renewable energy (VRE) available. "Behind the meter" photovoltaic (PV) rooftop solar panels, biomass combined heat and power (CHP) generation and a Li-ion battery system are considered as supportive solutions to wind power. The simulations made with RETScreen and EnergyPLAN confirm that solar power and a battery system can only play a modest role compared to wind power. A final economic analysis assesses the revenue projections for the new technologies. The results indicate a very positive investment potential for the new wind farms, coupled with a suitable Li-ion battery solution. Additionally, the thesis investigates the best options for resolving the frequency and voltage imbalances that appear after the introduction of intermittent energy sources. A flywheel technology has been included in the scenarios in order to enhance the primary frequency control of the whole system.
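The revenue-projection step described above can be illustrated with a minimal discounted-cash-flow sketch; every figure below (capacity, capacity factor, electricity price, costs, discount rate, lifetime) is an invented placeholder, not data from the thesis:

```python
# Minimal NPV sketch of a wind-farm investment, illustrative only.
def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

capacity_mw = 40            # placeholder farm size
capacity_factor = 0.35      # placeholder
hours_per_year = 8760
price_eur_mwh = 45.0        # placeholder electricity price
opex_per_year = 1.2e6       # placeholder operating cost, EUR
capex = 50e6                # placeholder investment, EUR

annual_energy = capacity_mw * capacity_factor * hours_per_year   # MWh
annual_cash = annual_energy * price_eur_mwh - opex_per_year

flows = [-capex] + [annual_cash] * 25    # assumed 25-year lifetime
project_npv = npv(flows, 0.06)           # assumed 6 % discount rate
print(round(project_npv / 1e6, 1))       # NPV in million EUR
```

Tools such as RETScreen automate exactly this kind of calculation, adding escalation, taxes and sensitivity analysis on top.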
33

Techno-economic evaluation of hydrochar via hydrothermal carbonisation of organic residues

Abdullahi, Abdirahman January 2022 (has links)
This thesis investigated the techno-economic feasibility of upgrading the sludge from a chemical pulp mill to hydrochar via hydrothermal carbonization (HTC). The intended use of the hydrochar was to replace fossil coal in metallurgical applications in the iron and steel industry. Process models were developed in order to obtain mass and energy balances of the HTC process for different technical configurations. The balances were used to evaluate the economic performance, in terms of hydrochar production cost as well as different profitability parameters. Two main scenarios were investigated:
Scenario 1: HTC process integrated with the pulp mill.
Scenario 2: Stand-alone HTC process.
To see the effect of having one or two HTC reactors, two cases were developed for each scenario: the first case used only mixed sludge from the pulp mill as feedstock for the HTC process (case 1, one reactor), while the second case used both mixed sludge and bark as feedstock (case 2, two reactors). In scenario 1, the effects of integrating the HTC process on the pulp mill's mass and energy balances were investigated. The results showed only very small impacts on the pulp mill, because the HTC process is significantly smaller than the mill. The total amount of steam to the steam turbine increased by 0.8 % and 0.9 % for case 1 and 2, respectively. In combination with the removed sludge, which is otherwise combusted in the mill's so-called power boiler, this entailed a total increase of the wood fuel consumption in the boiler by 3.2 % and 3.6 %, respectively. By implementing a second HTC reactor, the production cost of hydrochar in the integrated scenario (scenario 1) could be decreased from 4 600 SEK/ton (case 1) to 3 700 SEK/ton (case 2). The corresponding production costs in the stand-alone scenario (scenario 2) amounted to 5 400 SEK/ton (case 1) and 4 200 SEK/ton (case 2), respectively.
Both integration with the pulp mill and increasing the HTC production scale were thus found to be strategies that can lead to decreased hydrochar production costs. However, even the lowest production cost noted in this report is significantly higher than the corresponding price of coal. This indicates that other measures are required for hydrochar to become cost-competitive with fossil coal in the metallurgical industry. Examples are the possibility to use even lower-cost feedstocks, as well as policy tools targeting, e.g., the CO2 emissions from using fossil materials and energy carriers in the iron and steel industry. Based on the results from the investment calculation, it is concluded that an HTC process integrated with a pulp mill is preferable to a stand-alone HTC process: integration gives a higher NPV and a correspondingly shorter payback time, as well as lower hydrochar production costs. / Detta examensarbete har undersökt den tekno-ekonomiska genomförbarheten av att uppgradera slammet från ett kemiskt massabruk till hydrokol via hydrotermisk karbonisering (HTC). Den avsedda användningen av hydrokol var att ersätta fossilt kol inom metallurgiska tillämpningar i järn- och stålindustrin. Processmodeller utvecklades för att erhålla mass- och energibalanser för HTC-processen. Balanserna användes för att utvärdera de ekonomiska prestanda, i form av produktionskostnad för hydrokol samt olika lönsamhetsparametrar. Följande två huvudscenarier undersöktes:
Scenario 1: HTC-processen integrerad med massabruket.
Scenario 2: Fristående HTC-process.
För att se effekten av att ha en eller två HTC-reaktorer utvecklades två fall för varje scenario, där det första fallet endast använde blandat slam från massabruket som råvara för HTC-processen (fall 1, en reaktor), medan det andra fallet använde både blandat slam och bark som råmaterial (fall 2, två reaktorer). I scenario 1 undersöktes effekterna på massabrukets mass- och energibalanser av att integrera HTC-processen. Resultaten visade endast mycket små effekter på massabruket, på grund av att HTC-processen är betydligt mindre än bruket. Den totala mängden ånga till ångturbinen ökade med 0.8 % och 0.9 % för fall 1 respektive 2. I kombination med bortfall av slammet, som annars förbränns i brukets barkpanna, innebar detta en total ökning av förbrukningen av trädbränsle i pannan med 3.2 % respektive 3.6 %. Genom att implementera en andra HTC-reaktor skulle produktionskostnaden för hydrokol i det integrerade scenariot (scenario 1) kunna sänkas från 4 600 SEK/ton (fall 1) till 3 700 SEK/ton (fall 2). Motsvarande produktionskostnader i det fristående scenariot (scenario 2) uppgick till 5 400 SEK/ton (fall 1) respektive 4 200 SEK/ton (fall 2). Både integration med massabruk och ökning av produktionskapaciteten av HTC visade sig därför vara strategier som kan leda till minskade produktionskostnader för hydrokol. Men även den lägsta produktionskostnaden som noteras i denna rapport är betydligt högre än motsvarande pris på kol. Detta tyder på att det krävs andra åtgärder för att hydrokol ska bli konkurrenskraftigt ur kostnadssynpunkt gentemot fossilt kol i den metallurgiska industrin. Exempel är möjligheten att använda ännu billigare råvaror, såväl som policyverktyg som riktar in sig på t.ex. CO2-utsläppen från användning av fossila material och energibärare inom järn- och stålindustrin. Baserat på resultaten från investeringskalkylen dras slutsatsen att HTC-processen integrerad med ett massabruk är att föredra jämfört med en fristående HTC-process. Anledningen till att integrerad HTC föredras är att det ger högre nettonuvärde (NPV) och motsvarande lägre återbetalningstid, samt lägre produktionskostnader för hydrokol.
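The profitability comparison in this record can be sketched with a simple payback calculation. The four production costs come from the abstract; the plant output, selling price and investment figures are invented placeholders:

```python
# Simple payback sketch for the four HTC cases; investments, output
# and selling price are illustrative assumptions, not thesis data.
def simple_payback(investment, annual_net_revenue):
    """Years needed to recover the investment from a constant
    yearly net revenue; infinite if the margin is non-positive."""
    if annual_net_revenue <= 0:
        return float("inf")
    return investment / annual_net_revenue

hydrochar_tonnes_per_year = 10_000       # placeholder plant output
selling_price = 5_000                    # SEK/ton, placeholder
cases = {
    "integrated, 1 reactor":   {"cost": 4_600, "investment": 120e6},
    "integrated, 2 reactors":  {"cost": 3_700, "investment": 180e6},
    "stand-alone, 1 reactor":  {"cost": 5_400, "investment": 140e6},
    "stand-alone, 2 reactors": {"cost": 4_200, "investment": 200e6},
}

for name, c in cases.items():
    margin = (selling_price - c["cost"]) * hydrochar_tonnes_per_year
    print(name, simple_payback(c["investment"], margin))
```

Note how the stand-alone one-reactor case never pays back under this assumed price, mirroring the report's conclusion that the highest production cost is uncompetitive.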
34

An empirical investigation of information systems success. An analysis of the factors affecting banking information systems success in Egypt.

Hussein, Safaa A. January 2009 (has links)
Information technology (IT) plays an important role in contemporary organisations, and this role continues to expand in scope and complexity, affecting business operations dramatically. Advances in the IT industry have caused major changes in every industry sector. The banking industry is no exception, and it has undergone dramatic change over the past few decades. With the coming of the information age, IS investments are becoming increasingly important to banks' survival, growth and prosperity. IS managers are under increasing pressure to justify the value and contribution of IS expenditure to the productivity, quality and competitiveness of the organisation. This study proposes a model for investigating the success of information systems in the banking industry, in order to help bank managers evaluate the success of their IS, develop these systems, and improve the performance of bank managers and employees. Given that the ultimate dependent variable for this research is individual impacts, the DeLone and McLean (2003) updated IS success model is leveraged and extended. The proposed research model is guided by the selection of a suitable number of key demographic and situational variables, in addition to the adoption of the DeLone and McLean (2003) updated model. A variety of factors have been found to affect IS success in general; from the socio-technical viewpoint, however, IS success should capture both technological and human elements, and an effective Banking Information System (BIS) typically requires an appropriate combination of both. Thus, the technological dimensions (i.e. system, service and information quality) and the human dimensions (e.g. user satisfaction, perceived system benefits, user involvement, user training, age, education and system use) are a good starting point when considering suitable constructs for measuring BIS success.
The research methodology of this study involved interviews with BIS practitioners and professionals to shape and refine the research model. Further, a questionnaire survey was employed to collect data from bank managers in Egyptian banks. Structural Equation Modelling (SEM) using Partial Least Squares (PLS) was used to test the research model. Three research models were proposed according to age groups, and initial results from the PLS analysis differed for each research model. Findings indicated that system, information and service quality, level of training, age, length of system use, user involvement and top management support were the main predictors (success constructs) of user satisfaction and individual impacts in the three proposed research models. However, the relationships between these constructs varied according to each age group of managers. The study offers important academic and practical contributions. Firstly, as a contribution to research, the study serves to extend the DeLone and McLean (2003) IS success model by introducing some key human and situational dimensions and confirming certain links in that model within the context of the banking industry. The contribution to practice is especially relevant for bank CIOs, software designers and developers looking for ways to improve BIS development: it provides them with directions regarding the BIS success dimensions that should be considered to encourage bank managers to adopt, and be more satisfied with, a BIS, which in turn influences their job performance. / Egyptian Higher Education Ministry
35

Model Order Reduction and Control of an Organic Rankine Cycle Waste Heat Recovery System

Riddle, Derek S. January 2017 (has links)
No description available.
36

Modelling and stochastic simulation of synthetic biological Boolean gates

Sanassy, D., Fellerman, H., Krasnogor, N., Konur, Savas, Mierla, L.M., Gheorghe, Marian, Ladroue, C., Kalvala, S. January 2014 (has links)
Synthetic biology aspires to design, compose and engineer biological systems that implement specified behaviour. When designing such systems, hypothesis testing via computational modelling and simulation is vital in order to reduce the need for costly wet-lab experiments. As a case study, we discuss the use of computational modelling and stochastic simulation for engineered genetic circuits that implement Boolean AND and OR gates reported in the literature. We present performance analysis results for nine different state-of-the-art stochastic simulation algorithms and analyse the dynamic behaviour of the proposed gates. Stochastic simulations verify the desired functioning of the proposed gate designs.
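As a hedged illustration of the kind of stochastic simulation involved, a minimal Gillespie (SSA) model of a transcriptional AND gate might look as follows; the rates and the gate mechanism are toy assumptions, not the circuit designs analysed in the paper:

```python
import random

# Toy AND gate: output protein P is produced only when both inputs
# are "high", and degrades at a constant per-molecule rate.
def gillespie_and_gate(input_a, input_b, k_prod=5.0, k_deg=0.1,
                       t_end=100.0, seed=1):
    rng = random.Random(seed)
    t, P = 0.0, 0
    while t < t_end:
        a_prod = k_prod if (input_a and input_b) else 0.0
        a_deg = k_deg * P
        a_total = a_prod + a_deg
        if a_total == 0.0:          # no reaction can fire
            break
        t += rng.expovariate(a_total)       # time to next reaction
        if rng.random() < a_prod / a_total: # choose which reaction fires
            P += 1
        else:
            P -= 1
    return P

# Output is high only for the (1, 1) input combination
for a in (0, 1):
    for b in (0, 1):
        print(a, b, gillespie_and_gate(a, b))
```

This is the exact-SSA baseline; the nine algorithms benchmarked in the paper are optimised variants of this same reaction-selection loop.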
37

Anthropogenic influence on climate through changes in aerosol emissions from air pollution and land use change

Acosta Navarro, Juan Camilo January 2017 (has links)
Particulate matter suspended in air (i.e. aerosol particles) exerts a substantial influence on the climate of our planet and is responsible for severe public health problems in many regions across the globe. Human activities have altered natural and anthropogenic emissions of aerosol particles, either through direct emissions or indirectly by modifying natural sources. The climate effects of the latter have been largely overlooked. Humans have dramatically altered the land surface of the planet, causing changes in natural aerosol emissions from vegetated areas. Regulations on anthropogenic and natural aerosol emissions have the potential to affect the climate on regional to global scales. Furthermore, the regional climate effects of aerosol particles could be very different from those caused by other climate forcers (e.g. well-mixed greenhouse gases). The main objective of this work was to investigate the climatic effects of land use and air pollution via aerosol changes. Using numerical model simulations, it was found that land use changes in the past millennium have likely caused a positive radiative forcing via aerosol-climate interactions. This forcing is an order of magnitude smaller, and of opposite sign, compared with the radiative forcing caused by changes in direct aerosol emissions from other human activities. The results also indicate that future reductions of fossil fuel aerosols via air quality regulations may lead to additional warming of the planet by the mid-21st century and could also cause an important Arctic amplification of the warming. In addition, the mean position of the intertropical convergence zone and the Asian monsoon appear to be sensitive to aerosol emission reductions from air quality regulations. For these reasons, climate mitigation policies should take into consideration aerosol air pollution, which has not received sufficient attention in the past.
38

Dynamic System Modeling And State Estimation For Speech Signal

Ozbek, Ibrahim Yucel 01 May 2010 (has links) (PDF)
This thesis presents an all-inclusive framework for how current formant tracking and audio (and/or visual)-to-articulatory inversion algorithms can be improved. The possible improvements are summarized as follows. The first part of the thesis investigates the problem of formant frequency estimation when the number of formants to be estimated is fixed or variable, respectively. The fixed-number formant tracking method is based on the assumption that the number of formant frequencies is fixed along the speech utterance. The proposed algorithm is based on the combination of a dynamic programming algorithm and Kalman filtering/smoothing. In this method, the speech signal is divided into voiced and unvoiced segments, and the formant candidates are associated via the dynamic programming algorithm for each voiced and unvoiced part separately. Individual adaptive Kalman filtering/smoothing is used to perform the formant frequency estimation. The performance of the proposed algorithm is compared with some algorithms given in the literature. The variable-number formant tracking method considers those formant frequencies which are visible in the spectrogram. Therefore, the number of formant frequencies is not fixed and can change along the speech waveform. In that case, it is also necessary to estimate the number of formants to track. For this purpose, the proposed algorithm uses extra logic (a formant track start/end decision unit). The measurement update of each individual formant trajectory is handled via Kalman filters. The performance of the proposed algorithm is illustrated by some examples. The second part of this thesis is concerned with improving audiovisual-to-articulatory inversion performance. The related studies can be examined in two parts: Gaussian mixture model (GMM) regression based inversion and Jump Markov Linear System (JMLS) based inversion.
GMM regression based inversion involves modelling the audio (and/or visual) and articulatory data as a joint Gaussian mixture model. The conditional expectation of this distribution gives the desired articulatory estimate. In this method, we examine the usefulness of combinations of various acoustic features and the effectiveness of various types of fusion techniques in combination with audiovisual features. We also propose dynamic smoothing methods to smooth the articulatory trajectories. The performance of the proposed algorithm is illustrated and compared with conventional algorithms. JMLS inversion involves tying the acoustic (and/or visual) spaces and the articulatory space via multiple state space representations. In this way, the articulatory inversion problem is converted into a state estimation problem, where the audiovisual data are considered as measurements and the articulatory positions are state variables. The proposed inversion method first learns the parameter set of the state space model via an expectation-maximization (EM) based algorithm, and the state estimation is handled via an interacting multiple model (IMM) filter/smoother.
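The GMM-regression step can be sketched for a toy one-dimensional case: given a joint mixture over an acoustic feature x and an articulatory position y, the inversion estimate is the conditional expectation E[y | x]. The two-component mixture parameters below are invented for illustration, not trained on speech data:

```python
import numpy as np

# Joint GMM over (x, y): component means, 2x2 covariances, weights.
means = [np.array([0.0, 0.0]), np.array([4.0, 2.0])]
covs = [np.array([[1.0, 0.8], [0.8, 1.0]]),
        np.array([[1.0, -0.5], [-0.5, 1.0]])]
weights = [0.5, 0.5]

def gauss_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gmm_regression(x):
    """E[y | x] under the joint GMM: responsibility-weighted sum of
    the per-component linear (conditional-mean) predictors."""
    resp = np.array([w * gauss_pdf(x, m[0], C[0, 0])
                     for w, m, C in zip(weights, means, covs)])
    resp /= resp.sum()
    preds = [m[1] + C[1, 0] / C[0, 0] * (x - m[0])
             for m, C in zip(means, covs)]
    return float(np.dot(resp, preds))

print(gmm_regression(0.0))   # dominated by component 1: near 0
print(gmm_regression(4.0))   # dominated by component 2: near 2
```

Real audiovisual inversion uses high-dimensional features and many components, but the estimator has exactly this responsibility-weighted linear form; the dynamic smoothing proposed in the thesis is then applied to the resulting trajectory.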
39

Verslo transakcijų specifikavimas kuriant verslo valdymo sistemas / Specification of business transactions for enterprise system modelling

Budzinauskas, Donatas 29 January 2008 (has links)
Daugeliui įmonių ar organizacijų reikalingos informacinės sistemos, skirtos valdyti jų procesus ir duomenis. Darbe analizuojamos verslo procesų ir transakcijų modeliavimo galimybės, technologijos ir standartai: RUP, UML, XML (XMI), MERODE, R. Gusto, WAE, BPMN, MDA, Agile. Detaliau analizuojamas verslo procesų modelis ir jo plėtinys BPMN. Pateikti pasiūlymai notacijos išplėtimui duomenų objektais, modelių ir elementų interpretacijai modeliuojant, o modeliai panaudojami kodo generavimui. Sudarytas ir ištestuotas pilnai funkcionuojančio ir dalykinę sritį atitinkančio kodo generavimo algoritmas. / Nowadays, many business companies and organizations use information systems. This paper analyses business transactions and the tools and technologies for modelling transactions and processes in information and enterprise systems. Popular technologies such as UML and its extensions, RUP, XMI and MDA are analysed, and the MERODE and Agile methods are reviewed. The BPMN notation and its possibilities for code generation are analysed more closely. The BPMN notation is augmented with data objects for better code generation; this also gives a better understanding of data flows and of the influence of business processes and transactions. A process and transaction modelling strategy is defined which likewise enables a better code generation solution. A fully functioning code generation algorithm, matching the subject domain, was devised, implemented and tested.
40

3-D Scene Reconstruction from Multiple Photometric Images

Forne, Christopher Jes January 2007 (has links)
This thesis deals with the problem of three-dimensional scene reconstruction from multiple camera images. This is a well-established problem in computer vision and has been extensively researched. In recent years some excellent results have been achieved; however, existing algorithms often fall short of many biological systems in terms of robustness and generality. The aim of this research was to develop improved algorithms for reconstructing 3D scenes, with a focus on accurate system modelling and correctly dealing with occlusions. In scene reconstruction the objective is to infer scene parameters describing the 3D structure of the scene from the data given by camera images. This is an ill-posed inverse problem, where an exact solution cannot be guaranteed. A statistical approach to the scene reconstruction problem is introduced, and the differences between maximum a posteriori (MAP) and minimum mean square error (MMSE) estimates are considered. It is discussed how traditional stereo matching can be performed using a volumetric scene model. An improved model describing the relationship between the camera data and a discrete model of the scene is presented. This highlights some of the common causes of modelling errors, enabling them to be dealt with objectively. The problems posed by occlusions are considered. Using a greedy algorithm, the scene is progressively reconstructed to account for visibility interactions between regions, and the idea of a complete scene estimate is established. Some simple and improved techniques for reliably assigning opaque voxels are developed, making use of prior information. Problems with variations in the imaging convolution kernel between images motivate the development of a pixel dissimilarity measure. Belief propagation is then applied to better utilise prior information and obtain an improved global optimum.
A new volumetric factor graph model is presented which represents the joint probability distribution of the scene and the imaging system. By utilising the structure of the local compatibility functions, an efficient procedure for updating the messages is detailed. To help convergence, a novel approach of accentuating beliefs is shown. Results demonstrate the validity of this approach; however, the reconstruction error is similar to or slightly higher than that of the greedy algorithm. To simplify the volumetric model, a new approach to belief propagation is demonstrated by applying it to a dynamic model. This approach is developed as an alternative to the full volumetric model because it requires less memory and computation. Using a factor graph, a volumetric known-visibility model is presented which ensures the scene is complete with respect to all the camera images. Dynamic updating is also applied to a simpler single depth-map model. Results show this approach is unsuitable for the volumetric known-visibility model; however, improved results are obtained with the simple depth-map model.
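The message-passing machinery that the volumetric factor-graph model generalises can be sketched on a toy three-node chain; the unary and pairwise potentials below are invented, not the photometric compatibility functions of the thesis:

```python
import numpy as np

# Toy chain MRF: three binary nodes with per-node evidence and a
# smoothness potential encouraging neighbours to agree.
unary = [np.array([0.7, 0.3]),
         np.array([0.5, 0.5]),
         np.array([0.2, 0.8])]
pairwise = np.array([[0.9, 0.1],
                     [0.1, 0.9]])

def chain_marginal(unary, pairwise, node):
    """Exact marginal of `node` via forward/backward message passing
    (sum-product on a chain, where belief propagation is exact)."""
    n = len(unary)
    fwd = [np.ones(2) for _ in range(n)]   # messages from the left
    bwd = [np.ones(2) for _ in range(n)]   # messages from the right
    for i in range(1, n):
        fwd[i] = pairwise.T @ (fwd[i - 1] * unary[i - 1])
    for i in range(n - 2, -1, -1):
        bwd[i] = pairwise @ (bwd[i + 1] * unary[i + 1])
    belief = unary[node] * fwd[node] * bwd[node]
    return belief / belief.sum()

print(chain_marginal(unary, pairwise, 1))  # middle node pulled by both ends
```

On the loopy volumetric graph of the thesis the same message updates are iterated rather than swept once, and convergence is no longer guaranteed, which is what motivates the belief-accentuation heuristic described above.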
