271

New Methodology for Active Fault Tolerant Control Design with Respect to System Reliability

Khelassi, Ahmed 11 July 2011
The work developed in this thesis contributes to a methodology for active fault-tolerant control design that guarantees system reliability. This new methodology requires adapting reliability-analysis tools to the control field, with the load explicitly integrated into the models of actuator reliability. The first part treats the reconfigurability analysis of fault-tolerant control systems: a reconfigurability analysis based on energy consumption with respect to overall system reliability is presented, and a reconfigurability index is proposed that defines the functional limits of the controlled system as a function of fault severity and actuator reliability degradation. The second part is devoted to control allocation and re-allocation. Two re-allocation approaches are proposed that take actuator health degradation and ageing into account: control inputs are applied to the system with respect to actuator reliability and faults, and reliability indicators are integrated into the solution of the allocation and re-allocation problem. The third part contributes a fault-tolerant controller design incorporating actuator criticality. A sensitivity analysis of overall system reliability and a criticality indicator are proposed, and a new active fault-tolerant control method is developed under a Linear Matrix Inequality (LMI) formulation based on actuator criticality.
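The reliability-weighted allocation idea lends itself to a compact illustration. The sketch below is a generic weighted pseudo-inverse control allocator, not the thesis's LMI-based design: the effectiveness matrix `B`, the virtual control `v`, and the reliability values are hypothetical, and the weighting rule (share proportional to estimated reliability) is one plausible reading of "control inputs applied with respect to actuator reliability".

```python
# Illustrative sketch, not the thesis's method: reliability-weighted
# control allocation via a weighted pseudo-inverse. All values below
# are hypothetical.
import numpy as np

def reliability_weighted_allocation(B, v, reliability):
    """Distribute the virtual control v over redundant actuators,
    weighting each actuator by its estimated reliability
    (1 = healthy, toward 0 = degraded)."""
    W_inv = np.diag(reliability)             # more reliable => larger share
    M = B @ W_inv @ B.T
    # u = W^-1 B^T (B W^-1 B^T)^-1 v  minimizes u^T W u s.t. B u = v
    return W_inv @ B.T @ np.linalg.solve(M, v)

B = np.array([[1.0, 1.0, 0.5]])              # 3 redundant actuators, 1 axis
v = np.array([2.0])                          # demanded virtual control
print(reliability_weighted_allocation(B, v, [0.9, 0.2, 0.9]))
```

Run as-is, the degraded second actuator carries only a small fraction of the effort, mirroring the abstract's goal of steering load away from unreliable actuators.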
272

How contractual risk allocation provisions of oil and gas contracts have been, or may be, interpreted by an English court: a case study of some model offshore drilling rig contracts developed in the United Kingdom, Canada and the United States of America

Ofoegbu, Kelechi January 2018
This study examines how English courts have approached, or are likely to approach, attempts by the parties to oil and gas contracts to allocate inter se the risks arising from the activities which form the subject matter of their respective contracts, and therefore how effective those attempts are. The study utilises petroleum industry standard form offshore drilling contracts in the United Kingdom, Canada and the United States of America as the context for this analysis, and examines the risks associated with drilling and other incidental operations in the light of catastrophic events such as the Macondo disaster in the Gulf of Mexico and the Montara disaster in the Timor Sea. Drawing on the Economic Theory of Law espoused by Richard Posner, which correlates market behaviour, resource allocation and the legal system, and so conceptualises risk from a cost and utility perspective, the study will show that it is actually the economic consequences of the occurrence of an event that are being allocated, and that the entire notion of risk allocation is a determination of how the economic cost of the occurrence of a particular consequence will be borne by the parties to the contract. The study will conclude with a comparative analysis of risk allocation in the different model contracts, and an opinion on the success and effectiveness of the model contracts, as tools used by parties for risk allocation inter se, in response to the challenges created by legislative and judicial intervention. Justification for this opinion will be given, with reference to relevant case law and statutes in the different jurisdictions. Recommendations will be made on how the risk allocation structure can be improved, either by reference to other approaches the parties could adopt or by clarifying ambiguities in the current approach (where applicable), and by proposing a balance in the instances in which, from the study's perspective, the allocation formula is skewed, whether due to the imbalance of power between the parties or to the interference of external forces such as the courts and legislature.
273

Capacity allocation mechanisms for grid environments

Gardfjäll, Peter January 2006
During the past decade, Grid computing has gained popularity as a means to build powerful computing infrastructures by aggregating distributed computing capacity. Grid technology allows computing resources that belong to different organizations to be integrated into a single unified system image – a Grid. As such, Grid technology constitutes a key enabler of large-scale, cross-organizational sharing of computing resources. An important objective for the Virtual Organizations (VOs) that result from such sharing is to tame the distributed capacity of the Grid in order to manage it and make fair and efficient use of the pooled computing resources.

Most Grids to date have, however, been completely unregulated, essentially serving as a "source of free CPU cycles" for authorized Grid users. Whenever unrestricted access is admitted to a shared resource there is a risk of overexploitation and degradation of the common resource, a phenomenon often referred to as "the tragedy of the commons". This thesis addresses this problem by presenting two complementary Grid capacity allocation systems that allow the aggregate computing capacity of a Grid to be divided between users in order to protect the Grid from overuse while delivering fair service that satisfies the individual computational needs of different user groups.

These two Grid capacity allocation mechanisms constitute the core contribution of this thesis. The first mechanism, the SweGrid Accounting System (SGAS), addresses the need for coordinated soft, real-time quota enforcement across Grid sites. The SGAS project was an early adopter of the service-oriented principles that are now common practice in the Grid community, and the system has been tested in the SweGrid production environment. Furthermore, SGAS has been included in the Globus Toolkit, the de facto standard Grid middleware toolkit. SGAS employs a credit-based allocation model where research projects are granted quota allowances that can be spent across the Grid resources, which charge users for their resource consumption. This enforcement of usage limits thus produces real-time overuse protection.

The second approach, employed by the Fair Share Grid (FSGrid) system, uses a share-based allocation model where project entitlements are expressed in terms of hierarchical share policies that logically divide the Grid capacity between user groups. By coordinating local job scheduling to maintain these global capacity shares, the Grid resources collectively strive to schedule users for a "share of the Grid". We refer to this cooperative scheduling model as decentralized Grid-wide fairshare scheduling.
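As a rough sketch of the credit-based model described above (the class name, the charging rule, and the numbers are invented for illustration; the real SGAS is a distributed, service-oriented system rather than a single in-process object):

```python
# Minimal sketch of credit-based quota enforcement, assuming a simple
# charge of cpu_hours * cost_factor. Hypothetical names and values.
class ProjectAccount:
    def __init__(self, quota_credits):
        self.quota = quota_credits
        self.used = 0.0

    def reserve(self, cpu_hours, cost_factor=1.0):
        """Charge a job's expected usage against the project's
        Grid-wide allowance; refuse jobs that would overdraw it."""
        cost = cpu_hours * cost_factor
        if self.used + cost > self.quota:
            return False                     # soft real-time enforcement
        self.used += cost
        return True

acct = ProjectAccount(quota_credits=1000)
print(acct.reserve(800))   # True  -- within the allowance
print(acct.reserve(300))   # False -- would exceed the quota
```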
274

Improving locality with dynamic memory allocation

Jula, Alin Narcis 15 May 2009
Dynamic memory allocators are a determining factor of an application's performance and have the opportunity to improve a major performance bottleneck on today's computer hardware: data locality. To approach this problem, a memory allocator must first offer strategies that allow the locality problem to be addressed. However, while focusing on locality, an allocator must also not ignore the existing constraints of allocation speed and fragmentation, which further complicate its design. In order for a locality-improving technique to be successfully employed in today's large code applications, its integration needs to be automatic, without user intervention. The alternative, manual integration, is not a tractable solution.

In this dissertation we develop three novel memory allocators that explore different allocation strategies that enhance an application's locality. We conduct the first study that shows that allocation speed, fragmentation and locality-improving goals are antagonistic. We develop an automatic method that supplies allocation hints from C++ STL containers to their allocators. This method allows applications to benefit from locality-improving techniques at the cost of a simple re-compilation. We conduct the first study that quantifies the effect of allocation hints on performance, and show that an allocator with high locality of reference can be as competitive as one using an application's spatial feedback.

To further allow dynamic memory allocation to improve an application's performance, new and non-traditional strategies need to be explored. We develop a generic software tool that allows users to examine unconventional strategies. The tool allows users not only to focus on allocation strategies rather than their implementation, but also to compare and contrast various approaches.
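To make the notion of allocation hints concrete, here is a toy sketch, not one of the dissertation's allocators: a free-list allocator that, given a hint address, picks the free block closest to it so that related objects end up adjacent in memory. Block addresses and sizes are made up.

```python
# Toy hint-honoring free-list allocator (illustrative only).
class HintedAllocator:
    def __init__(self, free_blocks):
        # free_blocks: list of (address, size) pairs
        self.free = sorted(free_blocks)

    def allocate(self, size, hint=None):
        candidates = [b for b in self.free if b[1] >= size]
        if not candidates:
            return None
        if hint is None:
            block = candidates[0]                     # plain first fit
        else:                                         # locality: nearest block
            block = min(candidates, key=lambda b: abs(b[0] - hint))
        self.free.remove(block)
        addr, bsize = block
        if bsize > size:                              # return the remainder
            self.free.append((addr + size, bsize - size))
            self.free.sort()
        return addr

alloc = HintedAllocator([(0, 64), (1024, 64), (4096, 256)])
first = alloc.allocate(32)                  # e.g. a container's head node
print(alloc.allocate(32, hint=first))       # placed adjacent to `first`
```

A container-supplied hint (say, the address of the previously allocated node) is enough for the allocator to co-locate a container's elements, which is the kind of automatic STL-to-allocator feedback the abstract describes.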
275

Fair Treatment of Multicast Sessions and Their Receivers: Incentives for more efficient bandwidth utilization

Österberg, Patrik January 2007
Media-streaming services are rapidly gaining in popularity, and new ones are knocking on the door. Standard-definition Internet protocol television (IPTV) has already entered many living rooms, and high-definition IPTV will become commonplace in the not too distant future. Even more advanced and resource-demanding services, such as three-dimensional and free-view TV, are next in line. Video streaming is by nature extremely bandwidth-intensive, and this development will put the existing network infrastructure to the test.

In scenarios where many receivers are simultaneously interested in the same data, which is the case with popular live content, multicast transmission is more bandwidth-efficient than unicast. The reason is that the receivers of a multicast session share resources through a common transmission tree, where data are transmitted only once along any branch. The use of multicast transmission can therefore yield huge bandwidth savings. There are, however, no really strong incentives for Internet service providers (ISPs) to support multicast transmission, and deployment has consequently been slow.

We propose that more bandwidth be allocated to multicast flows in the case of network congestion. The ratio is based on the number of receivers and the bitrate they are able to obtain, since this is what determines the degree of resource sharing. We believe it is fair to take this into account, and accordingly call the proposed allocation multicast-favorable max-min fair. Further, we present two bandwidth-allocation policies that use different amounts of feedback to perform allocations that are reasonably close to multicast-favorable max-min fair.

We also propose two cost-allocation mechanisms that build upon the assumption that the cost of data transmission should be covered by the receivers. The mechanisms charge the receivers based on their share of resource usage, which in general is favorable to multicast receivers. The two cost-allocation mechanisms differ in that one strives for optimally fair cost allocations, whereas the other may give discounts to some receivers. The discounts facilitate larger groups of receivers, which can make services cheaper for the non-discounted receivers as well. The proposals make multicast transmission more attractive to the users of media-streaming services, and if they were implemented in multicast-enabled networks, the remaining ISPs would be forced to support multicast in order to stay competitive.
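A minimal sketch of the weighting idea, assuming a single bottleneck link and inventing all rates (the thesis's actual policies operate network-wide with feedback): each flow is weighted by its receiver count, so a session serving many receivers obtains a larger share than a competing unicast flow.

```python
# Receiver-weighted water-filling on one bottleneck link (illustrative).
def weighted_fair_share(capacity, flows):
    """flows: dict name -> (receivers, demand). Raise a per-receiver
    rate level; freeze flows whose demand is met, then re-level."""
    active = dict(flows)
    rates = {}
    while active and capacity > 1e-9:
        level = capacity / sum(r for r, _ in active.values())
        capped = {n: (r, d) for n, (r, d) in active.items()
                  if d <= level * r}
        if not capped:                       # nobody hits their demand
            for n, (r, d) in active.items():
                rates[n] = level * r
            return rates
        for n, (r, d) in capped.items():     # satisfied flows drop out
            rates[n] = d
            capacity -= d
            del active[n]
    rates.update({n: 0.0 for n in active})
    return rates

# one unicast flow vs. a 9-receiver multicast session on a 10 Mb/s link
print(weighted_fair_share(10.0, {"unicast": (1, 8.0), "mcast": (9, 20.0)}))
```

Here the 9-receiver session gets 9 Mb/s against the unicast flow's 1 Mb/s, i.e. both receive the same per-receiver rate, which is the sense in which the allocation is multicast-favorable yet fair.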
276

A Convex Decomposition Perspective on Dynamic Bandwidth Allocation and Applications

Morell Pérez, Antoni 23 September 2008
Traditionally, multiple-access schemes in multi-user communications systems have been designed either connection-oriented or traffic-oriented. In the first case, the goal is to provide as many orthogonal channels as possible, each one serving a different connection; this idea motivated the well-known FDMA, TDMA and CDMA solutions. On the other hand, random-access techniques, which started with the ALOHA protocol, aim to statistically multiplex a shared communication medium by exploiting the random and bursty nature of transmission needs in data networks. Most multiple-access solutions can be interpreted according to this classification or as a combination of the two approaches. Notwithstanding, modern systems such as the digital satellite communications standard DVB-RCS or the broadband wireless access standard WiMAX implement a multiple-access technique in which users request transmission opportunities and receive grants from the network, therefore requiring dynamic bandwidth allocation.

The concept of dynamic bandwidth allocation is wide and involves a number of physical-layer and link-layer variables, configurations and protocols. In this Ph.D. dissertation we first explore the mathematical foundation required to coordinate the distinct layers of the OSI protocol stack and the distinct nodes within the network: decomposition techniques focused on the resolution of convex programs, which have elegantly solved many problems in the signal processing and communications fields in recent years. Known schemes are reviewed and a novel decomposition methodology is proposed. Thereafter, we compare the four resulting strategies, each one having its own particular signalling needs, which result in distinct cross-layer interactions or signalling protocols at the implementation level. The results in terms of iterations required to converge are favourable to the proposed method, thus opening a new line of research.

Finally, we contribute two practical application examples in the DVB-RCS and WiMAX systems. First, we formulate the dynamic bandwidth allocation problem that derives from the multiple-access schemes of both systems. The resulting Network Utility Maximization (NUM) problem is then solved by means of the aforementioned decomposition mechanisms. The goal is to guarantee fairness among the users while preserving Quality of Service (QoS). To achieve this, we choose utility functions that allow the allocation to be balanced towards the highest-priority traffic flows under a common fairness framework. We show that in the scenarios considered, the proposed coupled-decomposition method reports significant gains, since it significantly reduces the iterations required (fewer iterations imply less signalling) or reduces the time needed to obtain the optimal allocation when it is computed centrally (so that more users can be managed). We further show the advantages of cross-layer interactions with the physical and upper layers, which make it possible to benefit from more favourable adjustments of the transmission parameters and to consider the QoS requirements of upper layers.

In general, an efficient implementation of dynamic bandwidth allocation in Demand Assignment Multiple Access (DAMA) schemes may report significant performance gains, but it requires proper coordination among system layers and network nodes, which is attained thanks to decomposition techniques. Each new scenario and system adds another optimization challenge, and the better we coordinate all the variables in the system towards the optimal point, the higher the overall gain.
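For readers unfamiliar with NUM decompositions, the following is a textbook dual-decomposition sketch of the kind of problem described above, not the thesis's coupled-decomposition method: each user maximizes a weighted log-utility minus a payment, while the link updates its price by subgradient ascent on excess demand. Weights, capacity, and step size are illustrative.

```python
# Dual decomposition for: maximize sum_i w_i*log(x_i) s.t. sum_i x_i <= C.
# The price update is the signalling exchanged between users and the link.
def num_dual_decomposition(C, weights, step=0.1, iters=200):
    price = 1.0
    for _ in range(iters):
        # each user solves: max w*log(x) - price*x  =>  x = w / price
        x = [w / price for w in weights]
        # the link raises/lowers its price according to excess demand
        price = max(1e-6, price + step * (sum(x) - C))
    return x, price

rates, price = num_dual_decomposition(C=10.0, weights=[1.0, 2.0, 2.0])
print([round(r, 2) for r in rates], round(price, 2))
# optimum: x_i = w_i * C / sum(w)  ->  [2.0, 4.0, 4.0]
```

Each pass of this loop is one signalling round, which is why a method needing fewer iterations, as the abstract claims for the coupled decomposition, translates directly into less signalling.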
277

Improving locality with dynamic memory allocation

Jula, Alin Narcis 15 May 2009 (has links)
Dynamic memory allocators are a determining factor of an application's performanceand have the opportunity to improve a major performance bottleneck ontoday's computer hardware: data locality. To approach this problem, a memoryallocator must rst oer strategies that allow the locality problem to be addressed.However, while focusing on locality, an allocator must also not ignore the existing constraintsof allocation speed and fragmentation, which further complicate its design. Inorder for a locality improving technique to be successfully employed in today's largecode applications, its integration needs to be automatic, without user intervention.The alternative, manual integration, is not a tractable solution.In this dissertation we develop three novel memory allocators that explore dierentallocation strategies that enhance an application's locality. We conduct the rststudy that shows that allocation speed, fragmentation and locality improving goalsare antagonistic. We develop an automatic method that supplies allocation hintsfrom C++ STL containers to their allocators. This method allows applications tobenet from locality improving techniques at the cost of a simple re-compilation. Weconduct the rst study that quanties the eect of allocation hints on performance,and show that an allocator with high locality of reference can be as competitive asone using an application's spatial feedback.To further allow dynamic memory allocation to improve an application's performance,new and non-traditional strategies need be explored. We develop a generic software tool that allows users to examine unconventional strategies. The tool allowsusers not only to focus on allocation strategies rather than their implementation, butalso to compare and contrast various approaches.
278

Comparative Study on the Organization and Management Systems of ROC's Armed Forces TV Centers

Chen, Chih-peng 29 August 2005
This study aims to understand and compare the organizational management systems and performance of the TV centers of the Republic of China's armed forces. It also collects information on the multimedia units of the USAF and the educational program production unit of the Open University of Kaohsiung, to get a picture of the current status of organizations engaged in multimedia program production and to serve as a reference for the future development of the TV centers of the ROC's armed forces. The research approach used here is characteristic-oriented, with the TV centers of the Army, Navy, and Air Force as the study objects. Interviews with members of each service's TV center and of the Open University of Kaohsiung provide detailed information on each organization, its manpower, equipment investment, and production quantities and contents. A further look at the multimedia units of the USAF and the Open University of Kaohsiung leads to two discussion topics: merging and outsourcing. The first issue is whether merging the services' TV centers is practicable; the second is whether outsourcing the programs is feasible. Finally, the limitations, contributions and suggestions of this study are presented.
279

Land use change through market dynamics: a microsimulation of land development, the bidding process, and location choices of households and firms

Zhou, Bin, 1977- 13 March 2014
Rapid urbanization is a pressing issue for planners, policymakers, transportation engineers, air quality modelers and others. Due to significant environmental, traffic and other impacts, the process of land development highlights a need for land use models with behavioral foundations. Such models seek to anticipate future settlement and transport patterns, helping ensure effective public and private investment decisions and policymaking that accommodate growth while mitigating environmental impacts and other concerns. A variety of land use models now exist, but a market-based model with sufficient spatial resolution and defensible behavioral foundations remains elusive. This dissertation addresses this goal by developing and applying such a model. Real estate markets involve numerous interactive agents and real estate with a great level of heterogeneity. In the absence of tractable theory for realistic real estate markets, this research takes a “bottom-up” approach and simulates the behavior of tens of thousands of individual agents based on actual data. Both the supply and demand sides of the market are modeled explicitly, with endogenously determined property prices and land use patterns (including distributions of households and firms). Notions of competition were used to simulate price adjustment, and market-clearing prices were obtained in an iterative fashion. When real estate markets reach equilibrium, each agent is aligned with a single, utility-maximizing location and each allocated location is occupied by the highest bidding agent(s). This approach helps ensure a form of local equilibrium (subject to imperfect information on the part of most agents) along with user-optimal land allocation patterns. The model system was applied to the City of Austin and its extraterritorial jurisdiction. Multiple scenarios reveal the strengths and limitations of the market simulation and available data sets. While equilibrium prices in forecast years are generally lower than observed or expected, the spatial distributions of property values, new development, and individual agents are reasonable. Longer-term forecasts were generated to test the performance of the model system. The forecast household and firm distributions in year 2020 are consistent with expectations, but property prices are forecast to experience noticeable changes. The model dynamics may be much improved by more appropriate maximum bid prices for each property. More importantly, this work demonstrates that microsimulation of real estate markets and the spatial allocation of households and firms is a viable pursuit. Such approaches herald a new wave of land use forecasting opportunities, for more effective policymaking and planning.
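The bidding process described above can be caricatured in a few lines. The sketch below is a stylized price-adjustment loop, not the dissertation's calibrated model: agents, bids, and the adjustment rule are hypothetical. Each agent seeks its best-surplus property, the highest bidder wins, and prices on contested properties rise until demand no longer collides.

```python
# Stylized market-clearing loop: highest bidder wins, contested prices rise.
def clear_market(bids, prices, bump=0.1, max_rounds=50):
    """bids[agent][prop] = maximum willingness to pay."""
    assignment = {}
    for _ in range(max_rounds):
        demand = {}
        for agent, wtp in bids.items():
            # each agent seeks the affordable property with the best surplus
            affordable = {p: wtp[p] - prices[p] for p in wtp
                          if wtp[p] >= prices[p]}
            if affordable:
                best = max(affordable, key=affordable.get)
                demand.setdefault(best, []).append(agent)
        assignment = {p: max(agents, key=lambda a: bids[a][p])
                      for p, agents in demand.items()}
        contested = [p for p, agents in demand.items() if len(agents) > 1]
        if not contested:                    # market has cleared
            return assignment, prices
        for p in contested:                  # raise contested prices
            prices[p] += bump
    return assignment, prices

bids = {"hh1": {"A": 1.0, "B": 0.6}, "hh2": {"A": 0.8, "B": 0.7}}
a, p = clear_market(bids, {"A": 0.5, "B": 0.5})
print(a, {k: round(v, 2) for k, v in p.items()})
```

Both households initially want property A; its price rises until one of them prefers B, leaving every location with its highest bidder, which is the equilibrium property the abstract describes.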
280

Wind Allocation Methods for Improving Energy Security in Residential Space and Hot Water Heating

Lakshminarayanan, Harisubramanian 22 August 2012
Worldwide, wind energy added to the energy mix of electricity suppliers may be seen as a way of improving energy security and reducing greenhouse gas emissions. However, due to wind's variability, wind electricity cannot be used on its own to meet demands that require a continuous supply of electricity. One solution to the variability problem is to adopt services that are capable of storing energy for use at a later time. Five new wind-allocation methods are considered that maximize the use of wind electricity for residential space and hot-water heating while at the same time reducing emissions. Simulation results show that households benefit from annual savings of about 30% to 36%, with an estimated payback period ranging between 3.5 and 5.5 years. The emissions reduction in the off-peak scenarios is between 32% and 35%, and about 86% in the anytime scenario. The share of heating demand satisfied ranges between 75% and 96%, and the total wind used for heating is between 3% and 4%.
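The abstract does not spell out its allocation methods, so the following is only a guessed-at illustration of the storage idea it rests on: hourly wind is used directly where possible, surplus is banked in a thermal store (e.g. a hot-water tank), and the store is drawn down against later heating demand. The hourly series, tank size, and efficiency are invented numbers, not the thesis's data.

```python
# Illustrative wind-to-thermal-storage allocation over hourly periods.
def allocate_wind(wind_kwh, heat_demand_kwh, store_max=10.0, eff=0.9):
    stored, met = 0.0, 0.0
    for wind, demand in zip(wind_kwh, heat_demand_kwh):
        direct = min(wind, demand)            # use wind as it blows
        surplus = wind - direct
        stored = min(store_max, stored + surplus * eff)  # bank the rest
        from_store = min(stored, demand - direct)        # draw down later
        stored -= from_store
        met += direct + from_store
    return met / sum(heat_demand_kwh)         # share of heating met by wind

wind = [4, 6, 1, 0, 5, 0]                     # hypothetical hourly kWh
demand = [2, 2, 3, 4, 1, 3]
print(f"{allocate_wind(wind, demand):.0%} of heating demand met by wind")
```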
