  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Evaluation and Implementation of Machine Learning Methods for an Optimized Web Service Selection in a Future Service Market

Karg, Philipp January 2014 (has links)
In future service markets, a selection among functionally equal services will be omnipresent. The resulting challenge of finding the best-fit service requires distinguishing between non-functional service characteristics (e.g., response time, price, availability). Service providers commonly capture these quality characteristics in so-called Service Level Agreements (SLAs). However, a service selection based on SLAs is inadequate, because static SLAs generally do not reflect the dynamic service behavior and quality changes of a service-oriented environment. Furthermore, profit-oriented service providers tend to embellish their SLAs by handling their correctness flexibly. Within the SOC (Service Oriented Computing) research project of the Karlsruhe University of Applied Sciences and Linnaeus University in Sweden, a service broker framework for an optimized web service selection is introduced. Instead of relying on the providers' quality assertions, distributed knowledge is built up by automatically monitoring and measuring service quality during each service consumption. The broker aims at optimizing the service selection based on past real service performance and the quality preferences defined by an individual consumer. This thesis covers the design, implementation and evaluation of appropriate machine learning methods, focusing on the broker's best-fit web service selection. Within the time-critical service optimization, the performance and scalability of the broker's machine learning play an important role. Therefore, high-performance algorithms for predicting future non-functional service characteristics within a continuous machine learning process were implemented. The introduced foreground-/background-model separates the real-time request for a best-fit service selection from the time-consuming machine learning.
The best-fit services for given consumer call contexts (e.g., call location and time, quality preferences) are continuously pre-determined within the asynchronous background-model. This eliminates performance issues on the critical path from the service request to the best-fit service recommendation. To evaluate the implemented best-fit service selection, a sophisticated test data scenario with real-world characteristics was created, featuring services with differently volatile performance, cyclic performance behavior and performance changes over time. Besides the significantly improved performance, the new implementation achieved a high overall selection accuracy: the actual best-fit service was determined in 70% of all service optimizations, and one of the two actual best-fit services in 94%.
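The broker idea described above can be sketched in a few lines. The sketch below is illustrative only (all names are hypothetical, and a simple exponential moving average stands in for the thesis's machine learning): quality is learned from monitored calls, and services are ranked by a consumer-weighted cost.

```python
from collections import defaultdict

class ServiceBroker:
    """Toy broker: learns service quality from monitored calls and
    ranks services by consumer-weighted cost (hypothetical API)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha                # EMA smoothing factor
        self.history = defaultdict(dict)  # service -> metric -> EMA

    def record_call(self, service, metrics):
        """Update quality estimates from one monitored consumption."""
        for metric, value in metrics.items():
            old = self.history[service].get(metric, value)
            # exponential moving average tracks drifting service behavior
            self.history[service][metric] = (
                self.alpha * value + (1 - self.alpha) * old)

    def best_fit(self, preferences):
        """Pick the service minimizing the preference-weighted cost.
        `preferences` maps metric -> weight; all metrics here are
        'lower is better' for simplicity."""
        def cost(service):
            q = self.history[service]
            return sum(w * q.get(m, float("inf"))
                       for m, w in preferences.items())
        return min(self.history, key=cost)

broker = ServiceBroker()
broker.record_call("svcA", {"response_time": 120, "price": 0.05})
broker.record_call("svcB", {"response_time": 40,  "price": 0.09})
broker.record_call("svcA", {"response_time": 110, "price": 0.05})
# a latency-sensitive consumer weights response time heavily
print(broker.best_fit({"response_time": 1.0, "price": 10.0}))
```

In a foreground-/background-split as described in the abstract, `best_fit` would be precomputed per call context in the background, so the foreground request is a plain lookup.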
92

Online Optimization Of RED Routers

Vaidya, Rahul 03 1900 (has links) (PDF)
No description available.
93

Kritické tepelné toky na hladkých a upravených površích / Critical Heat Flux on Smooth and Modified Surfaces

Suk, Ladislav January 2021 (has links)
This thesis deals with the problem of critical heat flux (CHF) on technically smooth and treated surfaces at low pressures. The theoretical part presents the basic concepts of two-phase flow and an analysis of existing work on the influence of the surface on CHF. The main part of the work describes the experimental apparatus built for CHF research at low pressures of 100-1500 kPa (1-15 bar) with a vertical, internally heated annular test section. The annulus consists of an outer glass tube with an inner diameter of 14.8 mm and an inner tube made of Inconel™ 625 / Optimized ZIRLO™ with an outer diameter of 9.14 mm and a heated length of 380/365 mm. CHF experiments on the technically smooth surface were performed at outlet pressures of 120, 200 and 300 kPa, inlet temperatures of 64, 78 and 91 °C, and mass fluxes of 400, 500, 600 and 800 kg/m²s. The Inconel tubes were also tested with two different surface modifications, abraded and bead-blasted, at mass fluxes of 400, 500 and 600 kg/m²s. A total of 122 experimental runs were conducted and the results were compared with experimental data from the literature. The maximum increase of CHF on the abraded / bead-blasted tube was 18.12% / 16.17%. The surface structure was analysed by laser microscopy, the wetting behaviour of the surface structures was measured by the sessile drop method, and the elemental analysis of the surface was evaluated using the EDS method.
94

Simulace a analýza provozu blokové šifry se statistickou samosynchronizací / Simulation and analysis of the block cipher mode with statistical self-synchronization

Kopčan, Marek January 2008 (has links)
The importance of cryptography is rising enormously. In the age of high technology, where information is the most valuable asset, this value needs to be protected, yet information must also be transported between parties and kept confidential. Because communication channels introduce errors, special modes of operation for block ciphers are required; not all modes can deal with this problem. This work deals with self-synchronizing modes of block ciphers, which protect information transferred over a communication channel against different types of errors. Two self-synchronizing modes are examined: OCFB (Optimized Cipher FeedBack) and SCFB (Statistical Cipher FeedBack). Both have their advantages and disadvantages. The goal of this work is to analyze both modes and to create a simulation model that should support further research on self-synchronizing modes.
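The self-synchronizing property that OCFB and SCFB generalize can already be seen in plain CFB mode. The sketch below is not the thesis's simulation model; it uses a SHA-256-based stand-in for the block cipher (an illustrative assumption, and not secure). After one ciphertext block is corrupted in transit, exactly that block and the next one decrypt incorrectly, and the stream then recovers on its own.

```python
import hashlib

BLOCK = 16  # bytes

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # stand-in "block cipher": a keyed one-way function; CFB only ever
    # uses the forward direction, so this suffices for the demo
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cfb_encrypt(key, iv, plaintext_blocks):
    feedback, out = iv, []
    for p in plaintext_blocks:
        c = bytes(a ^ b for a, b in zip(p, toy_block_cipher(key, feedback)))
        out.append(c)
        feedback = c  # ciphertext feeds the shift register
    return out

def cfb_decrypt(key, iv, ciphertext_blocks):
    feedback, out = iv, []
    for c in ciphertext_blocks:
        p = bytes(a ^ b for a, b in zip(c, toy_block_cipher(key, feedback)))
        out.append(p)
        feedback = c  # received ciphertext re-synchronizes the state
    return out

key, iv = b"k" * BLOCK, b"i" * BLOCK
plain = [bytes([i]) * BLOCK for i in range(6)]
cipher = cfb_encrypt(key, iv, plain)
cipher[2] = bytes(BLOCK)  # corrupt one ciphertext block in transit
recovered = cfb_decrypt(key, iv, cipher)
# blocks 2 and 3 are garbled, everything afterwards re-synchronizes
print([recovered[i] == plain[i] for i in range(6)])
```

OCFB and SCFB extend this behavior to recover even from insertion and deletion errors by scanning the ciphertext for synchronization patterns, which the plain CFB demo above cannot handle.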
95

Dynamische Rissdetektion mittels photogrammetrischer Verfahren – Entwicklung und Anwendung optimierter Algorithmen

Hampel, Uwe, Maas, Hans-Gerd 03 June 2009 (has links)
Digital close-range photogrammetry enables efficient acquisition of three-dimensional object surfaces in experimental investigations. Photogrammetric methods are in principle well suited, under appropriate boundary conditions, for the area-wide measurement of deformations and for crack detection. Drawing on current investigations of textile-reinforced concrete specimens, this contribution addresses the problem of crack detection and gives an overview of the state of development and the achievable accuracy potential. With regard to the practical application of the presented methods, various possibilities for optimization are finally discussed.
96

Mécanismes chimiques virtuels optimisés pour la prédiction des polluants dans des flammes turbulentes / Virtual chemical mechanisms optimized to capture pollutant formation in turbulent flames

Cailler, Mélody 08 October 2018 (has links)
The conflicting nature of performance, operability and environmental constraints leads engine manufacturers to finely optimize the burner geometry to find the best design compromise. Large Eddy Simulation (LES) is an attractive tool for this challenging task and is routinely used in design offices to capture macroscopic flow features. However, the prediction of phenomena influenced by complex kinetic effects, such as flame stabilization, extinction and pollutant formation, remains a crucial issue. Indeed, a comprehensive description of combustion chemistry requires detailed kinetic models that impose prohibitive computational costs, numerical stiffness and difficulties in modeling the coupling with unresolved turbulent scales. Reduced-cost chemistry description strategies must therefore be proposed to account for kinetic effects in LES of real combustion chambers. In this thesis, an original modeling approach, called virtual optimized chemistry, is developed. This strategy aims at describing the chemical flame structure and pollutant formation in relevant flame configurations at a low computational cost. Virtual optimized kinetic schemes, composed of virtual reactions and virtual species, are built by optimizing both the kinetic rate parameters and the thermo-chemical properties of the virtual species so as to capture the target flame quantities of interest.
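The idea of fitting virtual kinetic parameters to reference flame quantities can be illustrated with a deliberately simplified toy (the flame-speed correlation and all values below are illustrative assumptions, not the thesis's virtual schemes): for a single virtual reaction whose laminar flame speed follows the classical thermal scaling sqrt(A·exp(-Ta/Tad)), the optimization even has a closed form in log variables.

```python
import math

def virtual_flame_speed(A, Ta, Tad):
    """Toy model: one virtual reaction whose laminar flame speed scales
    as sqrt(A * exp(-Ta / Tad)) (classical thermal flame theory)."""
    return math.sqrt(A * math.exp(-Ta / Tad))

# "reference" flame speeds, standing in for a detailed mechanism
Tad_ref = [1800.0, 2000.0, 2200.0, 2100.0, 1900.0]
A_true, Ta_true = 1.0e8, 15000.0
S_ref = [virtual_flame_speed(A_true, Ta_true, T) for T in Tad_ref]

# optimize (A, Ta): 2*ln(S) = ln(A) - Ta * (1/Tad) is linear in the
# parameters, so the least-squares optimum solves the normal equations
xs = [1.0 / T for T in Tad_ref]
ys = [2.0 * math.log(S) for S in S_ref]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
Ta_fit, A_fit = -slope, math.exp(intercept)
print(round(Ta_fit, 2), round(A_fit / 1e8, 4))
```

Real virtual schemes optimize many rate parameters and species properties against several target quantities at once, which requires an iterative optimizer rather than this closed-form special case.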
97

Biochimie fonctionnelle des diacylglycérol acyltransférases ; apports à la biologie de synthèse des huiles / Functional study of diacylglycerol acyltransferases; toward oil synthetic biology

Aymé, Laure 20 October 2016 (has links)
Triacylglycerols (TAG) are an essential energy store in many cells. Their composition is highly diverse; they are the main component of seed oil for the food industry and are also used to produce compounds of industrial interest. Acyl-CoA:diacylglycerol acyltransferases (DGAT) catalyze the final and rate-limiting step of TAG synthesis by transferring a fatty acid onto a diacylglycerol. In plants there are three DGAT families, DGAT1, DGAT2 and DGAT3, which share no homology and for which no structure is known. This prevents any improvement of seed oil yield and quality by a rational approach. The contribution of DGAT1 to edible oil accumulation has been demonstrated. In some plants, DGAT2 plays a key role in the synthesis of unusual TAG, such as the hydroxylated TAG found in castor oil and used to produce lubricants and bioplastics. The contribution of DGAT3 to TAG biosynthesis remains to be demonstrated in planta. We studied the three DGAT families of the model plant Arabidopsis thaliana, which belongs to the same family as oilseed rape, as well as a DGAT1 from oil palm, an industrial crop. Expression in bacteria and in model or oleaginous yeasts, together with the study of mutant plant lines, allowed us to finely characterize the activities of these enzymes. The modulation of yeast TAG content and composition induced by DGAT expression also demonstrated the value of these enzymes for the production of tailored microbial oils.
98

On error-robust source coding with image coding applications

Andersson, Tomas January 2006 (has links)
This thesis treats the problem of source coding in situations where the encoded data is subject to errors. The typical scenario is a communication system, where source data such as speech or images should be transmitted from one point to another. A problem is that most communication systems introduce some sort of error in the transmission. A wireless communication link is prone to introduce individual bit errors, while in a packet-based network, such as the Internet, packet losses are the main source of error. The traditional approach to this problem is to add error correcting codes on top of the encoded source data, or to employ some scheme for retransmission of lost or corrupted data. The source coding problem is then treated under the assumption that all data transmitted from the source encoder reaches the source decoder on the receiving end without any errors. This thesis takes another approach and treats source and channel coding jointly, under the assumption that there is some knowledge about the channel that will be used for transmission. Such joint source-channel coding schemes have potential benefits over the traditional separated approach. More specifically, joint source-channel coding can typically achieve better performance using shorter codes than the separated approach, which is useful in scenarios with constraints on the delay of the system. Two different flavors of joint source-channel coding are treated in this thesis: multiple description coding and channel optimized vector quantization. Channel optimized vector quantization is a technique to directly incorporate knowledge about the channel into the source coder; this thesis contributes to the field by using channel optimized vector quantization in a couple of new scenarios. Multiple description coding is the concept of encoding a source using several different descriptions in order to provide robustness in systems with losses in the transmission. One contribution of this thesis is an improvement to an existing multiple description coding scheme; another is to place multiple description coding in the context of channel optimized vector quantization. The thesis also presents a simple image coder which is used to evaluate some of the results on channel optimized vector quantization.
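The concept of multiple description coding can be illustrated with a deliberately simple toy coder (not the scheme studied in the thesis): the source samples are split into an even and an odd description and sent over separate channels; with both descriptions the decoder reconstructs exactly, while with only one it interpolates the missing samples.

```python
def mdc_encode(samples):
    """Split a signal into two descriptions (even / odd samples)."""
    return samples[0::2], samples[1::2]

def mdc_decode(d_even=None, d_odd=None, length=None):
    """Reconstruct: exact with both descriptions, interpolated with one."""
    if d_even is not None and d_odd is not None:
        out = [0] * (len(d_even) + len(d_odd))
        out[0::2], out[1::2] = d_even, d_odd
        return out
    desc, offset = (d_even, 0) if d_even is not None else (d_odd, 1)
    out = [0.0] * length
    for i, v in enumerate(desc):
        out[2 * i + offset] = v
    # fill the missing samples by averaging their neighbours
    for i in range(length):
        if (i - offset) % 2 != 0:
            left = out[i - 1] if i > 0 else out[i + 1]
            right = out[i + 1] if i < length - 1 else out[i - 1]
            out[i] = (left + right) / 2
    return out

signal = [0, 1, 4, 9, 16, 25]
even, odd = mdc_encode(signal)
print(mdc_decode(even, odd))               # both received: exact
print(mdc_decode(d_even=even, length=6))   # one lost: smooth estimate
```

Practical multiple description coders quantize each description and trade off the rates and distortions of the side and central reconstructions, but the graceful degradation when one description is lost is exactly the property shown here.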
99

Strukturell Design för Bostadshus

Kherfan, Rashid January 2023 (has links)
Delays caused by non-optimized collaboration between different technical disciplines can lead to significant economic losses and have a major impact on production results; this problem is highlighted in this study, which is why the study is considered important. Purpose: to present a reference document that demonstrates an approach for optimizing the interaction between architecture and structural engineering. Method: the study involved informal interviews with experienced structural engineers and architects to gain insights into collaboration in the design process. A literature review was conducted to provide a broader understanding based on research articles and books in the fields of architecture and construction engineering. The design process in a construction project was described, highlighting the crucial role of collaboration and communication among different professional groups in achieving project goals. Dimensioning checks and external review were used to ensure compliance with safety and structural integrity requirements. Results: architectural and structural outcomes are presented together, with drawings and dimensioning of the load-bearing components shown in detail, including comprehensive calculations, floor plans and diagrams. Conclusion: through effective and well-organized collaboration between the architect and the structural engineer, conflicts can be minimized and potential problems resolved early in the design process.
100

[pt] AJUSTE ÓTIMO POR LEVENBERG-MARQUARDT DE MÉTODOS DE PREVISÃO PARA INICIAÇÃO DE TRINCA / [en] OPTIMAL FIT BY LEVENBERG-MARQUARDT OF PREDICTION METHODS FOR CRACK INITIATION

GABRIELA WEGMANN LIMA 01 November 2022 (has links)
[en] Most structures working under alternating loads must be dimensioned to prevent fatigue crack initiation, the main mechanism of mechanical damage in these cases. The various parameters of the fatigue damage prediction models used in such designs should preferably be measured by optimally fitting their equations to well-measured experimental data. Indeed, the accuracy of predictions based on these models depends directly on the quality of the fits used to obtain those parameters. The main purpose of this work is therefore to study the best way to obtain the parameters of the leading fatigue crack initiation prediction models through experimental data fittings based on the Levenberg-Marquardt algorithm. First, several εN tests were performed on a 6351-T6 aluminum alloy to verify the performance of the proposed fit for the Coffin-Manson and Ramberg-Osgood equations. Then, literature data for eight other materials were used to fit classic strain-life models, as well as models based on the Walker exponent, to evaluate the effect of non-zero mean loads in εN tests. Finally, the fitting of an SN model including the Walker exponent, which accounts for fatigue limits and mean load effects, was studied. This study also includes statistical considerations to quantify the reliability factor under different probability density function assumptions, based on ten data sets from the literature.
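As a rough illustration of the fitting approach (not the thesis's code), the sketch below implements a small Levenberg-Marquardt loop in pure Python and fits the Coffin-Manson strain-life equation ε_a = A(2N)^b + C(2N)^c to synthetic, noise-free data. All parameter values are made up for the demonstration, and residuals are taken in log space, a common choice for data spanning several decades.

```python
import math

def coffin_manson(N, A, b, C, c):
    """Strain-life model eps_a = A*(2N)^b + C*(2N)^c
    (elastic Basquin term + plastic Coffin-Manson term)."""
    return A * (2 * N) ** b + C * (2 * N) ** c

def solve(M, v):
    """Gaussian elimination with partial pivoting (tiny dense systems)."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for col in range(k, n + 1):
                M[r][col] -= f * M[k][col]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def lm_fit(Ns, eps, p, lam=1e-3, iters=100):
    """Levenberg-Marquardt fit of the model to (N, eps) data in log space."""
    def residuals(q):
        return [math.log(coffin_manson(N, *q)) - math.log(e)
                for N, e in zip(Ns, eps)]

    def jacobian(q):
        A, b, C, c = q
        rows = []
        for N in Ns:
            t, f = 2 * N, coffin_manson(N, *q)
            rows.append([t ** b / f,
                         A * t ** b * math.log(t) / f,
                         t ** c / f,
                         C * t ** c * math.log(t) / f])
        return rows

    r = residuals(p)
    sse = sum(x * x for x in r)
    for _ in range(iters):
        J = jacobian(p)
        JtJ = [[sum(row[i] * row[j] for row in J) for j in range(4)]
               for i in range(4)]
        Jtr = [sum(row[i] * ri for row, ri in zip(J, r)) for i in range(4)]
        while True:
            damped = [[JtJ[i][j] + (lam * JtJ[i][i] if i == j else 0.0)
                       for j in range(4)] for i in range(4)]
            step = solve(damped, [-g for g in Jtr])
            p_new = [pi + si for pi, si in zip(p, step)]
            try:
                r_new = residuals(p_new)
            except (ValueError, OverflowError):  # model left valid region
                lam *= 5
                if lam > 1e12:
                    return p, sse
                continue
            sse_new = sum(x * x for x in r_new)
            if sse_new < sse:   # accept step, relax damping
                p, r, sse, lam = p_new, r_new, sse_new, lam / 3
                break
            lam *= 5            # reject step, increase damping
            if lam > 1e12:
                return p, sse
    return p, sse

# synthetic, noise-free data generated from known "true" parameters
true = [0.008, -0.09, 0.6, -0.56]
Ns = [10 ** k for k in range(2, 8)]
eps = [coffin_manson(N, *true) for N in Ns]
fit, sse = lm_fit(Ns, eps, p=[0.02, -0.12, 0.3, -0.5])
print([round(v, 4) for v in fit])
```

Fitting real εN data additionally requires handling scatter, run-outs and mean-stress corrections, which is where the statistical considerations mentioned in the abstract come in.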
