351

Är etiska fondinvesteringar försvarbara : vad kostar etik? / Are ethical fund investments justifiable: what does ethics cost?

Blad, Tobias, Nilsson, Kristian January 2013
The selection of ethical funds has recently increased, and with it the number of investors with social and moral preferences in the capital market. There is an ongoing debate about whether ethical funds perform better or worse than funds without ethical criteria, as well as a lively discussion about whether investors understand the consequences of investing in ethical funds. This study therefore draws on rationality and decision theory, which primarily govern investment decisions, and on portfolio theory, one of the underlying theories behind fund management and its development. The purpose of this study is to explain whether ethical restrictions on the selection of securities affect risk and return in the fund portfolio. The study takes a positivist research approach and uses a quantitative method; the calculations and equations used are presented. The study indicates that ethical funds actually deliver better risk-adjusted returns than their Swedish benchmark index, and it suggests a tendency for ethical funds to deliver better excess returns than funds without ethical preferences. Furthermore, no statistically significant difference between the ethical funds and the benchmark in terms of investment risk could be demonstrated. One limitation of the study is that the survey mainly covers the Swedish market and Swedish company stakeholders, so there may be a need for research on a global market where cultural aspects are also considered. Since it has previously been unclear how ethical funds perform in the Swedish market, and at a time when demand for ethical choices is increasing, the goal of this study was to clarify the performance of ethical funds.
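The risk-adjusted comparison reported above is conventionally made with a ratio of mean excess return to volatility. The abstract does not state the exact measure used, so the following is a generic Sharpe-ratio sketch on made-up monthly return series:

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Sharpe ratio: mean excess return divided by the standard deviation of returns."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Hypothetical monthly returns for an ethical fund and a benchmark index
ethical = [0.012, -0.004, 0.020, 0.008, -0.010, 0.015]
benchmark = [0.010, -0.006, 0.018, 0.005, -0.012, 0.011]

print(sharpe_ratio(ethical), sharpe_ratio(benchmark))
```

A study like this one would compute such ratios for each fund and the index, then test whether the differences are statistically significant.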
352

Studies On The Dynamics And Stability Of Bicycles

Basu-Mandal, Pradipta 09 1900
This thesis studies the dynamics and stability of some bicycles. The dynamics of idealized bicycles is of interest due to complexities associated with the behaviour of this seemingly simple machine. It is also useful as it can be a starting point for analysis of more complicated systems, such as motorcycles with suspensions, frame flexibility and thick tyres. Finally, accurate and reliable analyses of bicycles can provide benchmarks for checking the correctness of general multibody dynamics codes. The first part of the thesis deals with the derivation of fully nonlinear differential equations of motion for a bicycle. Lagrange’s equations are derived along with the constraint equations in an algorithmic way using computer algebra. Then equivalent equations are obtained numerically using a Newton-Euler formulation. The Newton-Euler formulation is less straightforward than the Lagrangian one and it requires the solution of a bigger system of linear equations in the unknowns. However, it is computationally faster because it has been implemented numerically, unlike Lagrange’s equations, which involve long analytical expressions that need to be transferred to a numerical computing environment before being integrated. The two sets of equations are validated against each other using consistent initial conditions. The match obtained is, expectedly, very accurate. The second part of the thesis discusses the linearization of the full nonlinear equations of motion. Lagrange’s equations have been used. The equations are linearized and the corresponding eigenvalue problem studied. The eigenvalues are plotted as functions of the forward speed ν of the bicycle. Several eigenmodes, like weave, capsize, and a stable mode called caster, have been identified along with the speed intervals where they are dominant.
The results obtained, for certain parameter values, are in complete numerical agreement with those obtained by other independent researchers, and further validate the equations of motion. The bicycle with these parameters is called the benchmark bicycle. The third part of the thesis makes a detailed and comprehensive study of hands-free circular motions of the benchmark bicycle. Various one-parameter families of circular motions have been identified. Three distinct families exist: (1) A handlebar-forward family, starting from capsize bifurcation off straight-line motion, and ending in an unstable static equilibrium with the frame perfectly upright, and the front wheel almost perpendicular. (2) A handlebar-reversed family, starting again from capsize bifurcation, but ending with the front wheel again steered straight, the bicycle spinning infinitely fast in small circles while lying flat in the ground plane. (3) Lastly, a family joining a similar flat spinning motion (with handlebar forward), to a handlebar-reversed limit, circling in dynamic balance at infinite speed, with the frame near upright and the front wheel almost perpendicular; the transition between handlebar forward and reversed is through moderate-speed circular pivoting with the rear wheel not rotating, and the bicycle virtually upright. In the fourth part of this thesis, some of the parameters (both geometrical and inertial) for the benchmark bicycle have been changed and the resulting different bicycles and their circular motions studied, showing other families of circular motions. Finally, some of the circular motions have been examined, numerically and analytically, for stability.
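The linearized analysis described above leads to a quadratic eigenvalue problem of the form det(lam^2 M + lam C(v) + K(v)) = 0, whose roots are the eigenvalues plotted against forward speed. The thesis's actual bicycle matrices are not given in this abstract, so the sketch below solves a generic problem of that form by companion linearization, checked against two decoupled damped oscillators (illustrative placeholders, not the benchmark bicycle's matrices):

```python
import numpy as np

def quadratic_eigenvalues(M, C, K):
    """Roots of det(lam^2 M + lam C + K) = 0 via companion linearization:
    with state [q, dq], solve the generalized problem A z = lam B z."""
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-K, -C]])
    B = np.block([[np.eye(n), np.zeros((n, n))],
                  [np.zeros((n, n)), M]])
    return np.linalg.eigvals(np.linalg.solve(B, A))

# Check: two decoupled damped oscillators, zeta = 0.1, omega = 2 and 3 rad/s.
# Analytic eigenvalues are -zeta*omega +/- j*omega*sqrt(1 - zeta^2).
M = np.eye(2)
C = np.diag([2 * 0.1 * 2.0, 2 * 0.1 * 3.0])
K = np.diag([2.0 ** 2, 3.0 ** 2])
lams = quadratic_eigenvalues(M, C, K)
```

With the benchmark bicycle's published matrices substituted (C = v*C1 and K = g*K0 + v^2*K2 in the standard formulation), plotting the real parts of these eigenvalues against v locates the weave and capsize speed intervals.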
353

趨近一般化資料倉儲與資料探勘之效能評估模型 / Toward a More Generalized Benchmark Workload Model for Data Warehouse and Data Mining

邱士涵, Chiu, Shih-Han Unknown Date
As the Internet has grown and database technology has matured, data have become much easier to obtain than before; many Internet applications are in effect automatic data-gathering tools, so the amount of data keeps growing. A data warehouse is a kind of database that stores large amounts of historical data and provides aggregated or statistical information to support decision making. Data mining finds, in large volumes of data, rules that are useful for decisions, helping managers make decisions and create new business opportunities. The performance of a data warehouse strongly affects users' work efficiency, so workload models for evaluating and predicting data-warehouse performance and efficiency, commonly called benchmarks, have emerged. However, the data-warehouse benchmark specifications published to date are built for a few typical, specific domains, and their performance evaluation rests on synthetic workloads; when the domain in which a user applies a data warehouse differs greatly from the domain of the evaluation tool, the measured performance may differ considerably from the benchmark result. In data mining, the accuracy of the mining result matters to business, more so than the time spent on mining, yet there is no effective, user requirement-oriented tool for evaluating the accuracy of mining results, nor any standard performance criteria for data mining. We design a user requirement-oriented workload model to evaluate the performance of data warehouses and the accuracy of data-mining tools.
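The abstract argues that mining accuracy matters more than mining time but does not define its accuracy metric; a common sketch is to compare the rules a miner discovers against a reference set using precision and recall (the rule names below are made up):

```python
def precision_recall(discovered, reference):
    """Set-based precision and recall of discovered rules against a reference set."""
    discovered, reference = set(discovered), set(reference)
    tp = len(discovered & reference)          # rules found that are actually correct
    precision = tp / len(discovered) if discovered else 0.0
    recall = tp / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical association rules found by a miner vs. a hand-checked reference set
found = {"milk->bread", "beer->chips", "tea->sugar"}
truth = {"milk->bread", "tea->sugar", "eggs->flour"}
print(precision_recall(found, truth))
```

A benchmark for mining accuracy would generate data with known embedded rules (the reference set) and score each tool this way.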
354

Efficient Broadcast for Multicast-Capable Interconnection Networks

Siebert, Christian 20 November 2006
The broadcast function MPI_Bcast() from the MPI-1.1 standard is one of the most heavily used collective operations in the message-passing programming paradigm. This diploma thesis makes use of a feature called "Multicast", which is supported by several network technologies (like Ethernet or InfiniBand), to create an efficient MPI_Bcast() implementation, especially for large communicators and small-sized messages. A preceding analysis of existing real-world applications leads to an algorithm that not only performs well on synthetic benchmarks but performs even better for a wide class of parallel applications. The resulting broadcast has been implemented for the open-source MPI library "Open MPI" using IP multicast. The achieved results show that the new broadcast is almost always better than existing point-to-point implementations as soon as the number of MPI processes exceeds 8 nodes. The performance gain reaches a factor of 4.9 on 342 nodes, because the new algorithm scales practically independently of the number of involved processes.
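The crossover around 8 nodes can be illustrated with a rough latency model: a binomial-tree broadcast needs about ceil(log2(P)) sequential message steps, while a multicast needs roughly one step plus a fixed acknowledgement phase. The constants below are illustrative assumptions, not measurements from the thesis:

```python
import math

def t_binomial(p, step=1.0):
    """Binomial-tree broadcast: one message step per tree level, ceil(log2(p)) levels."""
    return math.ceil(math.log2(p)) * step

def t_multicast(step=1.0, ack=0.5):
    """Single multicast step plus a fixed acknowledgement overhead."""
    return step + ack

# The modeled speedup grows with the process count, since the tree cost grows
# logarithmically while the multicast cost stays (roughly) constant.
for p in (4, 8, 64, 342):
    print(p, t_binomial(p) / t_multicast())
```

Real measurements include per-node variation and reliability overheads, which is why the measured factor (4.9 on 342 nodes) differs from any such toy model.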
355

XML και σχεσιακές βάσεις δεδομένων: πλαίσιο αναφοράς και αξιολόγησης / XML and relational databases: a frame of report and evaluation

Παλιανόπουλος, Ιωάννης 16 May 2007
The eXtensible Markup Language (XML) is clearly the prevailing model for data representation on the World Wide Web. It is a data description language comprehensible to both humans and computers. Its use was initially limited to data exchange, but thanks to its expressiveness (in contrast to the relational model) it can serve as an effective "vehicle" for transporting, handling and storing information. Contemporary applications make heavy use of XML technology to support communication and interoperability. Supporting XML at the infrastructure level, however, would reduce application development time, make applications almost automatically compliant with standards, and make them less error-prone. In terms of infrastructure, a database able to handle XML properly would benefit a wide range of applications, multiplying its efficiency as the database becomes an information base. Thus, as applications become more complex and demanding, strengthening databases with technologies that serve the nature of the problems promises a more effective response. But how can XML documents be supported efficiently at the infrastructure level? At first glance the question seems rhetorical: since XML is a relatively new technology, new XML-aware infrastructures can be built from scratch. This is indeed a viable approach, and there is considerable activity in the database research community that focuses on exploiting it; for exactly this purpose, special database systems called "Native XML Databases" have been created. However, the disadvantage of such systems is that this approach does not build on the many years of research invested in relational database technology. The research question is therefore whether relational technology suffices to support XML data correctly, or whether new techniques are required. In this thesis we study whether relational database management systems (RDBMSs) provide suitable ground for handling XML documents. Having theoretically analyzed the ways in which RDBMSs handle XML, we then experimentally assess the performance of two of the most popular relational database management systems. The aim is to draw up a frame of reference for the assessment and evaluation of relational databases that support XML (XML-enabled RDBMSs).
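One classic RDBMS-based approach analyzed in such studies is "shredding" an XML document into a generic edge table, one row per element with a pointer to its parent. The sketch below is a minimal illustration of that idea using SQLite, not the specific schemes benchmarked in the thesis:

```python
import sqlite3
import xml.etree.ElementTree as ET

def shred(xml_text, conn):
    """Store an XML tree in a relational 'edge table': one row per element,
    keeping its id, parent id, tag name, and text content."""
    conn.execute("CREATE TABLE edge (id INTEGER, parent INTEGER, tag TEXT, text TEXT)")
    counter = 0

    def walk(elem, parent):
        nonlocal counter
        counter += 1
        my_id = counter
        conn.execute("INSERT INTO edge VALUES (?, ?, ?, ?)",
                     (my_id, parent, elem.tag, (elem.text or "").strip()))
        for child in elem:
            walk(child, my_id)

    walk(ET.fromstring(xml_text), None)

conn = sqlite3.connect(":memory:")
shred("<book><title>XML</title><year>2007</year></book>", conn)
rows = conn.execute("SELECT tag, text FROM edge ORDER BY id").fetchall()
print(rows)
```

Path queries over the document then become self-joins on the edge table, which is precisely the kind of workload whose performance an XML-enabled RDBMS evaluation measures.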
356

A Comparison of EMT, Dynamic Phasor, and Traditional Transient Stability Models

Yang, Rae Rui Ooi 29 October 2014
This thesis presents a transient stability method using dynamic phasors. The method can be used to investigate low-frequency (<5 Hz) and sub-synchronous (5 Hz-60 Hz) oscillations. It has major advantages over both the traditional transient stability method and the EMT method: it makes time-domain modeling of higher-frequency oscillations possible, which is not achievable with the conventional method, and it can be simulated at a much larger time step than a PSCAD/EMTDC simulation. Comparisons of the results with a traditional model and a detailed EMT model are also presented, and they show very accurate results at frequencies up to 60 Hz.
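The dynamic-phasor idea is to track the slowly varying fundamental Fourier coefficient of a waveform over a sliding one-cycle window, X1(t) = (1/T) * integral over one cycle of x(tau)*exp(-j*w0*tau) dtau, so that a sub-5 Hz envelope can be followed at a large time step. This minimal sketch (illustrative, not the thesis's network model) extracts a 2 Hz modulation riding on a 60 Hz carrier:

```python
import numpy as np

F0, FS = 60.0, 3840.0                                  # fundamental (Hz), sample rate (Hz)
t = np.arange(0.0, 0.5, 1.0 / FS)
envelope = 1.0 + 0.1 * np.cos(2 * np.pi * 2.0 * t)     # slow 2 Hz amplitude modulation
x = envelope * np.cos(2 * np.pi * F0 * t)

def dynamic_phasor(x, t, k=1):
    """Sliding one-cycle average of x(t)*exp(-j*k*w0*t): the k-th dynamic phasor."""
    n = int(FS / F0)                                   # samples per fundamental cycle
    prod = x * np.exp(-1j * 2 * np.pi * k * F0 * t)
    kernel = np.ones(n) / n                            # one-cycle moving average
    return np.convolve(prod, kernel, mode="valid")

X1 = dynamic_phasor(x, t)
recovered = 2.0 * np.abs(X1)                           # slowly varying amplitude ~ envelope
```

Because `recovered` varies at 2 Hz rather than 60 Hz, it can be integrated with a far larger time step than the raw waveform, which is the computational advantage the abstract describes.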
357

Modelling of tsunami generated by submarine landslides

Sue, Langford Phillip January 2007
Tsunami are fascinating but potentially devastating natural phenomena that have occurred regularly throughout history along New Zealand's shorelines, and around the world. With increasing population and the construction of infrastructure in coastal zones, the effect of these large waves has become a major concern. Many natural phenomena are capable of creating tsunami. Of particular concern is the underwater landslide-induced tsunami, due to the potentially short warning before waves reach the shore. The aims of this research are to generate a quality benchmark dataset suitable for comprehensive comparisons with numerical model results and to increase our understanding of the physical processes involved in tsunami generation. The two-dimensional experimental configuration is based on a benchmark configuration described in the scientific literature, consisting of a semi-elliptical prism sliding down a submerged 15° slope. A unique feature of these experiments is the method developed to measure water surface variation continuously in both space and time. Water levels are obtained using an optical technique based on laser-induced fluorescence, which is shown to be comparable in accuracy and resolution to traditional electrical point wave gauges. In the experiments, the landslide density and initial submergence are varied and detailed measurements of wave heights, lengths, propagation speeds, and shore run-up are made. Particle tracking velocimetry is used to record the landslide kinematics and sub-surface water velocities. Particular attention is paid to maintaining a high level of test repeatability throughout the experimental process. The experimental results show that a region of high pressure ahead of the landslide forces up the water over the front half of the landslide to form the leading wave crest, which propagates ahead of the landslide.
The accelerating fluid above, and the turbulent wake behind, the moving landslide create a region of low pressure, which draws down the water surface above the rear half of the landslide to form the leading trough. Differences in the phase and group velocities of the components in the wave packet cause waves to be continually generated on the trailing end of the wave train. The downstream position that these waves form continually moves downstream with time and the wave packet is found to be highly dispersive. The interaction of the landslide pressure field with the free surface wave pressure field is important, as the location of the low pressure around the landslide relative to the wave field acts to reinforce or suppress the waves above. This has a substantial effect on the increase or decrease in wave potential energy. When the low pressure acts to draw down a wave trough, the wave potential energy increases. When the low pressure is below a wave crest, it acts to suppress the crest amplitude, leading to an overall decrease in wave potential energy. Measurements of the efficiency of energy transfer from the landslide to the wave field show that the ratio of maximum wave potential energy to maximum landslide kinetic energy is between 0.028 and 0.138, and tends to increase for shallower initial landslide submergences and heavier specific gravities. The ratio of maximum wave potential energy to maximum landslide potential energy ranges between 0.011 and 0.059 and tends to be greater for shallower initial submergences. For two experimental configurations the ratio of maximum wave potential energy to maximum fluid kinetic energy is estimated to be 0.435 and 0.588. The wave trough initially generated above the rear end of the landslide propagates in both onshore and offshore directions. The onshore-propagating trough causes a large initial draw-down at the shore. 
The magnitude of the maximum draw-down is related to the maximum amplitude of the offshore-propagating first wave trough. A wave crest generated by the landslide as it decelerates at the bottom of the slope causes the maximum wave run-up observed at the shore. A semi-analytical model, based on inviscid and irrotational theory, is used to investigate the wave generation process of a moving submerged object in a constant depth channel. The simplified geometry allows a variety of phenomena, observed during the experimental tests, to be investigated further in a more controlled setting. The variations in the growth, magnitude, and decay of energy as a function of time are due to the interaction of the pressure distribution surrounding the moving slider with the wave field, in particular, the leading crest and trough. The largest energy transfer between slider kinetic energy and wave potential energy occurs when there is prolonged interaction between the slider's low pressure region and the leading wave trough. The generation of onshore propagating waves by a decelerating landslide is confirmed, and the magnitude of the maximum wave run-up is found to be dependent on the magnitude of the slider deceleration. The model also shows that slides with Froude number close to unity convert substantial amounts of energy into offshore propagating waves. The onshore propagating wave potential energy is not as sensitive to Froude number. A further result from the model simulations is that the specific shape of the slider has only a minor influence on the wave response, provided the slider's length and area are known. A boundary element model, based on inviscid and irrotational theory, is used to simulate the laboratory experiments.
Model predictions of the wave field are generally accurate, particularly the magnitude and range of wave amplitudes within the wave packet, the arrival time of the wave group, the amplitude of the run-up and run-down at the shore, the time the maximum run-down occurs, and the form and magnitude of the wave potential energy time history. The ratios of maximum wave potential energy to maximum slider kinetic energy are predicted to within ±29%. The model predictions of the crest arrival times are within 3.6% of the measured times. The inability of the inviscid and irrotational model to simulate the flow separation and wake motions leads to a 45% under-prediction of the maximum fluid kinetic energy. Both the semi-analytical and BEM models highlight the need for the correct specification of initial slider accelerations in numerical simulations in order to accurately predict the wave energy.
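The wave potential energy ratios quoted above follow from the standard expression E_p = 0.5*rho*g * integral of eta^2 dx per unit channel width, where eta is the surface elevation. The sketch below evaluates this integral with the trapezoidal rule and checks it against the analytic value for a sine wave (the profile is illustrative, not experimental data):

```python
import numpy as np

RHO, G = 1000.0, 9.81                      # water density (kg/m^3), gravity (m/s^2)

def wave_potential_energy(eta, dx):
    """E_p = 0.5*rho*g * integral(eta^2 dx) per unit width, trapezoidal rule."""
    e2 = eta ** 2
    return 0.5 * RHO * G * dx * (e2[0] / 2 + e2[1:-1].sum() + e2[-1] / 2)

# One wavelength of a sine wave, amplitude a: analytic E_p = rho*g*a^2*L/4
a, L = 0.05, 2.0
x = np.linspace(0.0, L, 2001)
ep = wave_potential_energy(a * np.sin(2 * np.pi * x / L), x[1] - x[0])
print(ep)
```

Dividing such a value by the slider's kinetic or potential energy gives exactly the dimensionless transfer-efficiency ratios (0.011-0.138) reported in the abstract.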
358

Ethical investing - why not? : An evaluation of financial performance of ethical indexes in comparison to conventional indexes

Mironova, Anastasia, Kynäs, Lovisa January 2012
Problem: Do ethical investments perform better than conventional investments? Purpose: To evaluate whether Shariah-compliant indexes and/or socially responsible indexes can improve the financial performance of an investment portfolio. Sub-problem: What kind of relationship exists between socially responsible investments and faith-based investments, represented by Shariah-compliant investments? Sub-purpose: To discover how two types of ethical investments, socially responsible and Shariah-compliant, are related. Method: Quantitative study covering three types of investment styles across four index families during the period from 2000 until 2011; financial performance is evaluated through the Sharpe ratio, Treynor ratio and Jensen's alpha. Conclusions: Conventional, socially responsible, and Shariah-compliant indexes do not show any significant differences in financial performance on a global basis. However, Shariah-compliant indexes slightly outperformed conventional and socially responsible indexes during financial downturns. At the same time, socially responsible indexes were observed to be the most volatile over the whole study period, compared with conventional and Shariah-compliant indexes. Regarding relationships, high correlations were found between the ethical indexes, as well as between ethical and conventional indexes.
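The three performance measures named in the method can be sketched as follows. The return series are made up, and the formulas are the standard textbook definitions rather than anything specific to this study:

```python
from statistics import mean

def beta(fund, market):
    """CAPM beta: sample covariance(fund, market) / sample variance(market)."""
    mf, mm = mean(fund), mean(market)
    cov = sum((f - mf) * (m - mm) for f, m in zip(fund, market)) / (len(fund) - 1)
    var = sum((m - mm) ** 2 for m in market) / (len(market) - 1)
    return cov / var

def treynor(fund, market, rf=0.0):
    """Treynor ratio: mean excess return per unit of systematic risk (beta)."""
    return (mean(fund) - rf) / beta(fund, market)

def jensen_alpha(fund, market, rf=0.0):
    """Jensen's alpha: return above what CAPM predicts for the fund's beta."""
    return (mean(fund) - rf) - beta(fund, market) * (mean(market) - rf)

# Hypothetical monthly returns: a fund that moves exactly 2x with the market,
# so beta = 2 and Jensen's alpha = 0 by construction.
market = [0.010, -0.020, 0.030, 0.005]
fund = [0.020, -0.040, 0.060, 0.010]
```

An index comparison like the one above would compute these measures per index family and sub-period (e.g. downturns) before testing for significant differences.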
359

Une Approche Générique pour la Sélection d'Outils de Découverte de Correspondances entre Schémas / A Generic Approach for Selecting Schema Matching Tools

Duchateau, Fabien 20 November 2009
Interoperability between applications and bridges between different data sources have become crucial for enabling optimal information exchange. However, some of the processes required for this integration cannot be fully automated because of their complexity. One of them, schema matching, has been studied for many years: it tackles the problem of discovering semantic correspondences between elements of different data sources, yet it is still mostly performed manually. Consequently, deploying large information-sharing systems will only be possible by (semi-)automating this matching process. Many schema matching tools have been developed in recent decades to automatically discover mappings between schema elements. However, these tools generally address matching tasks for specific criteria, such as large-scale scenarios or the discovery of complex mappings. Unlike ontology alignment research, there is no common platform for evaluating these tools, and this profusion of schema matching tools, combined with the two problems above, does not make it easy for a user to choose the tool best suited to discovering correspondences between schemas. The first contribution of this thesis is a benchmarking tool, XBenchMatch, for measuring the performance (in terms of quality and time) of schema matching tools. A corpus of about ten matching scenarios is provided with XBenchMatch, each representing one or more criteria of the schema matching process.
We have also designed and implemented new measures to evaluate the quality of integrated schemas and the user's post-match effort. This study of existing tools led to a better understanding of the schema matching process. The first observation is that, without external resources such as dictionaries or ontologies, these tools are generally unable to discover correspondences between elements with very different labels; conversely, using such resources only rarely enables the discovery of correspondences between elements whose labels resemble each other. Our second contribution, BMatch, is a schema matching tool that includes a structural similarity measure to counter these problems, and we demonstrate empirically the advantages and limits of our approach. Indeed, like most schema matching tools, BMatch uses a weighted average to combine several similarity values, which entails a loss of quality and efficiency; moreover, configuring the various parameters is another difficulty for the user. To remedy these problems, our tool MatchPlanner introduces a new method for combining similarity measures by means of decision trees. Since these trees can be learned, the parameters are configured automatically and the similarity measures are not systematically applied. We show that our approach improves matching quality and run-time performance compared with existing tools. Finally, we let the user specify a preference between precision and recall.
Even when equipped with automatic parameter configuration, schema matching tools are not yet generic enough to obtain acceptable quality results for a majority of scenarios. We have therefore extended MatchPlanner into a schema matcher "factory", named YAM (Yet Another Matcher). This tool brings more flexibility because it generates tailor-made matchers for a given scenario. Indeed, these matchers can be seen as machine-learning classifiers, since they classify pairs of schema elements as relevant or not as mappings. The best matcher is thus built and selected from a large set of classifiers. We also measure the impact on quality when the user provides expert mappings to the tool or indicates a preference between precision and recall.
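The weighted-average combination of similarity measures attributed above to BMatch-style matchers can be sketched as follows. The two measures, the weights, and the element labels are illustrative assumptions, not the thesis's actual configuration:

```python
from difflib import SequenceMatcher

def label_sim(a, b):
    """Terminological similarity: normalized edit-based ratio on the raw labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def token_sim(a, b):
    """Jaccard overlap on underscore-separated label tokens."""
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def combined(a, b, w=(0.6, 0.4)):
    """Weighted average of the two measures; a pair above some threshold
    would be proposed as a correspondence."""
    return w[0] * label_sim(a, b) + w[1] * token_sim(a, b)

pairs = [("author_name", "name_of_author"), ("author_name", "isbn")]
print({p: round(combined(*p), 2) for p in pairs})
```

The decision-tree alternative described in the abstract replaces the fixed weights with a learned plan that decides, per pair, which measures to apply and in what order.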
360

Pilotage de la production décentralisée et des charges non conventionnelles dans le contexte Smart Grid et simulation hybride temps réel / Study of massive insertion of decentralized energy and unconventional load in Smart Grid context and hybrid real-time simulation

Mercier, Aurélien 28 September 2015 (has links)
In the field of electricity distribution networks, the opening of the energy market to competition and the massive insertion of decentralized generators in recent years have led to a profound change in how networks operate and are managed. In this context, solutions for controlling consumption and production must be provided so that the current network can accommodate tomorrow's new production units and loads, such as photovoltaic panels, micro wind turbines, cogeneration units, electric vehicles, smart homes, etc. These controls influence the instantaneous consumption and production of the network's users. It thus becomes possible to act on consumption so as to smooth peaks or synchronize demand with periods of high renewable production. In the same way, production can be controlled to take part in ancillary services. These control strategies, based on new information and communication technologies, aim to avoid both a degradation of the voltage waveform quality and a complete rebuilding of the distribution network, which would be economically very costly. This work, part of the GreenLys project funded by the French environment and energy management agency, evaluates the impact of decentralized generators and electric vehicles on the distribution network, and then develops control solutions. Two types of control are studied: controlling the phase to which a single-phase installation is connected, and controlling the reactive power of decentralized generators. These controls rely on the new components of tomorrow's electricity networks, such as smart meters.
In a final part, the control strategies developed are evaluated on real equipment using a real-time hybrid simulation platform. / In the field of electricity distribution networks, the opening of the electricity market and the large-scale insertion of dispersed generators (DG) in recent years have radically changed both the operation and the management of the network. In this context, new integration solutions are needed to connect DG, such as photovoltaic panels, micro wind turbines and cogeneration units, as well as new loads, such as electric vehicles and smart homes, without reducing the voltage waveform quality or requiring a very expensive reinforcement of the power system. The objectives of these solutions are to influence consumer consumption, in order to reduce the peak level and shift consumption to periods of high renewable production, and to control DG output so that it takes part in ancillary services. New information and communication technologies (NICT) are heavily used in the development of these control strategies. This PhD work is part of the French project GreenLys, supported by the French environment and energy management agency. GreenLys is a four-year project focused on developing a real-scale Smart Grid in the two French cities of Lyon and Grenoble. As a first step, this work evaluates the impact of DG and electric vehicles on the distribution network. Based on the results of this impact study, two types of decentralized control strategies are investigated. The first focuses on the phase connection: since most consumers and DG connected to the distribution grid are single-phase, methods for choosing the best phase connection are studied. The second focuses on new DG reactive power control strategies. In the last part, the strategies are evaluated in a Power Hardware-in-the-Loop simulation with a real solar inverter.
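The phase-connection problem described above can be illustrated with a toy greedy heuristic: connect each new single-phase installation to the currently least-loaded phase, which tends to balance the three phases. This sketch is an assumption for illustration only, not the thesis's actual selection method.

```python
def assign_phase(phase_loads: dict, new_load_kw: float) -> str:
    """Connect a new single-phase load to the least-loaded phase (greedy heuristic).

    phase_loads maps phase names to their current aggregate load in kW;
    the chosen phase's load is updated in place.
    """
    best = min(phase_loads, key=phase_loads.get)
    phase_loads[best] += new_load_kw
    return best

# Example: L2 carries the least load, so a new 3 kW installation lands there.
loads = {"L1": 12.0, "L2": 9.5, "L3": 11.0}
print(assign_phase(loads, 3.0))  # L2
print(loads)
```

A real scheme would also account for forecast consumption and generation profiles rather than instantaneous load alone.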
