  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

A MULTI-FUNCTIONAL PROVENANCE ARCHITECTURE: CHALLENGES AND SOLUTIONS

2013 December 1900 (has links)
In service-oriented environments, services are composed into workflows for distributed problem solving. A significant advantage of using workflows is the ability to capture the execution details of the services' transformations. These execution details, referred to as provenance information, are usually traced automatically and stored in provenance stores. Provenance data comprises the data recorded by a workflow engine during a workflow execution: it identifies what data is passed between services, which services are involved, and how results are eventually generated for particular sets of input values. Provenance information is of great importance and has found applications across computer science, including bioinformatics, databases, social networks, and sensor networks. However, current exploitation of provenance data is very limited, because provenance systems were originally developed for specific applications. Applying learning and knowledge-discovery methods to provenance data can therefore yield rich and useful information about workflows and services. In this work, the challenges with workflows and services are studied to discover the possibilities and benefits of addressing them with provenance data. A multi-functional architecture is presented which tackles these workflow and service issues by exploiting provenance data; the challenges include workflow composition, abstract workflow selection, refinement, evaluation, and graph-model extraction. The specific contribution of the proposed architecture is its novelty in providing a basis for exploiting the previous execution details of services and workflows, together with artificial-intelligence and knowledge-management techniques, to resolve the major challenges regarding workflows. The presented architecture is application-independent and can be deployed in any area. The requirements for such an architecture and its building components are discussed, along with the responsibilities of the components, related work, and the implementation details of the architecture and each component.
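The lineage queries the abstract describes — which services were involved and how a result was derived — can be sketched over a toy provenance store (all names and fields here are illustrative, not from the thesis):

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """One record traced by a (hypothetical) workflow engine."""
    service: str                                   # which service ran
    inputs: dict                                   # data passed into the service
    outputs: dict                                  # data it produced
    upstream: list = field(default_factory=list)   # services whose outputs fed this one

# A two-step workflow: 'align' feeds 'annotate'.
store = [
    ProvenanceRecord("align", {"seq": "ACGT"}, {"hits": 3}),
    ProvenanceRecord("annotate", {"hits": 3}, {"genes": ["g1"]}, upstream=["align"]),
]

def lineage(store, service):
    """Reconstruct how a result was derived by walking upstream links."""
    rec = next(r for r in store if r.service == service)
    out = [service]
    for up in rec.upstream:
        out = lineage(store, up) + out
    return out

print(lineage(store, "annotate"))  # ['align', 'annotate']
```

A learning or knowledge-discovery method would then mine many such records, e.g. to rank candidate services during workflow composition.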
72

Computational Aspects of Learning, Reasoning, and Deciding

Zanuttini, Bruno 27 June 2011 (has links) (PDF)
We present results and research projects concerning the computational aspects of classical problems in Artificial Intelligence. We are interested in the setting of agents able to describe their environment through a possibly huge number of Boolean descriptors, and to act upon this environment. Typical applications of such studies are the design of autonomous robots (for exploring unknown zones, for instance) or of software assistants (for scheduling, for instance). The ultimate goal of research in this domain is the design of agents that learn autonomously by interacting with their environment (including human users), that reason to produce new pieces of knowledge and to explain observed phenomena, and that decide rationally which action to take at any moment. Ideally, such agents will be fast and efficient as soon as they start to interact with their environment, will improve their behavior as time goes by, and will be able to communicate naturally with humans. Among the numerous research questions raised by these objectives, we are especially interested in concept and preference learning, in reinforcement learning, in planning, and in some underlying problems in complexity theory. Particular attention is paid to interaction with humans and to huge numbers of environment descriptors, as required in real-world applications.
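Concept learning over Boolean descriptors, one of the topics listed above, can be illustrated with a classic most-specific-conjunction learner (a generic sketch, not the author's algorithm; descriptor names are invented):

```python
def learn_conjunction(positives):
    """Learn the most specific conjunction consistent with positive examples
    over Boolean descriptors: keep only descriptors all examples agree on."""
    hypothesis = dict(positives[0])   # start from the first positive example
    for ex in positives[1:]:
        hypothesis = {k: v for k, v in hypothesis.items() if ex.get(k) == v}
    return hypothesis

positives = [
    {"red": True, "round": True, "small": False},
    {"red": True, "round": True, "small": True},
]
print(learn_conjunction(positives))  # {'red': True, 'round': True}
```

With a huge number of descriptors, as the abstract emphasizes, each generalization step still costs only one pass over the current hypothesis, which is what makes this style of learner attractive.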
73

Analysis of Hybrid CSMA/CA-TDMA Channel Access Schemes with Application to Wireless Sensor Networks

Shrestha, Bharat 27 November 2013 (has links)
A wireless sensor network consists of a number of sensor devices and one or more coordinators or sinks. A coordinator collects the sensed data from the sensor devices for further processing. In such networks, sensor devices are generally battery-powered. Since wireless transmission of packets consumes a significant amount of energy, it is important for a network to adopt a medium access control (MAC) technique that is energy-efficient and satisfies the communication performance requirements. Carrier sense multiple access with collision avoidance (CSMA/CA), a popular access technique because of its simplicity, flexibility, and robustness, suffers from poor throughput and energy inefficiency in wireless sensor networks. Time division multiple access (TDMA), on the other hand, is a collision-free and delay-bounded access technique but suffers from a scalability problem. For this reason, this thesis focuses on the design and analysis of hybrid channel access schemes which combine the strengths of both CSMA/CA and TDMA. In a hybrid CSMA/CA-TDMA scheme, the use of the CSMA/CA period and the TDMA period can be optimized to enhance communication performance in the network. If such a hybrid scheme is not designed properly, high congestion during the CSMA/CA period and wasted bandwidth during the TDMA period result in poor throughput and energy efficiency. To address this issue, distributed and centralized channel access schemes are proposed to regulate the activities of the sensor devices (such as transmitting, receiving, idling, and entering low-power mode). This regulation during the CSMA/CA period, together with the allocation of TDMA slots, reduces traffic congestion and thus improves network performance. Time-slot allocation methods for hybrid CSMA/CA-TDMA schemes are also proposed and analyzed. Finally, such hybrid CSMA/CA-TDMA schemes are applied in a cellular layout model for multihop wireless sensor networks to mitigate the hidden-terminal collision problem.
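One superframe of a hybrid scheme can be sketched as follows — reserved TDMA slots for the most loaded devices, random-backoff contention for the rest. Queue sizes, the 8-slot contention window, and the allocation rule are toy assumptions, not the thesis's algorithm:

```python
import random

def hybrid_superframe(devices, tdma_slots, seed=0):
    """Toy one-superframe allocator: `devices` maps device id -> queued packets.
    The most loaded devices get contention-free TDMA slots; the rest contend
    in a CSMA/CA period modeled as a single backoff-slot pick."""
    rng = random.Random(seed)
    by_load = sorted(devices, key=lambda d: devices[d], reverse=True)
    tdma, csma = by_load[:tdma_slots], by_load[tdma_slots:]
    sent = {d: 1 for d in tdma if devices[d] > 0}      # reserved slot: guaranteed send
    # CSMA/CA: a contender succeeds only if no other contender picks its backoff slot.
    picks = {d: rng.randrange(8) for d in csma if devices[d] > 0}
    for d, slot in picks.items():
        if list(picks.values()).count(slot) == 1:
            sent[d] = 1
    return sent

queues = {"s1": 5, "s2": 3, "s3": 2, "s4": 1}
print(hybrid_superframe(queues, tdma_slots=2))
```

Sweeping `tdma_slots` in such a model is one way to see the trade-off the abstract describes: too few slots congests the contention period, too many wastes reserved bandwidth.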
74

Achieving Quality of Service Guarantees for Delay Sensitive Applications in Wireless Networks

Abedini, Navid 2012 August 1900 (has links)
In the past few years, we have witnessed continuous growth in the popularity of delay-sensitive applications. Applications like live video streaming, multimedia conferencing, VoIP, and online gaming account for a major part of Internet traffic these days, and this trend is predicted to continue in the coming years. This emphasizes the significance of developing efficient scheduling algorithms for communication networks with guaranteed low-delay performance. In our work, we address the delay issue in several major instances of wireless communication networks. First, we study a wireless content distribution network (CDN) in which requests for content may have service deadlines. Our wireless CDN consists of a media vault that hosts all the content in the system and a number of local servers (base stations), each having a cache for temporarily storing a subset of the content. Two major questions arise in this framework: (i) content caching: which content should be loaded in each cache? and (ii) wireless network scheduling: how should the transmissions from the wireless servers be scheduled? Using ideas from queueing theory, we develop provably optimal algorithms that jointly solve the caching and scheduling problems. Next, we focus on wireless relay networks. It is well accepted that network coding can enhance the performance of these networks by exploiting the broadcast nature of the wireless medium; the improvement is usually evaluated in terms of the number of transmissions required to deliver flow packets to their destinations. In this work, we study the effect of delay on the performance of network coding by characterizing a trade-off between latency and the gain achieved by network coding. More specifically, we associate a holding cost with delaying packets before delivery and a transmission cost with each broadcast transmission made by the relay node. Using a Markov decision process (MDP) argument, we prove that a simple threshold-based policy is optimal in the sense of minimizing the long-run average cost. Finally, we analyze delay-sensitive applications in wireless peer-to-peer (P2P) networks. We consider a hybrid network consisting of (i) an expensive base-station-to-peer (B2P) network with unicast transmissions and (ii) a free broadcast P2P network. In this framework, we study two popular applications: (a) content distribution with service deadlines, and (b) multimedia live streaming. In both problems, we utilize random linear network coding over finite fields to simplify the coordination of transmissions. For these applications, we provide efficient algorithms that schedule transmissions so that quality of service (QoS) requirements are satisfied at minimum B2P cost. The algorithms are proven to be throughput-optimal for sufficiently large field sizes and perform reasonably well for finite fields.
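The holding-cost/transmission-cost trade-off behind the threshold result can be illustrated with a toy relay queue: the relay holds packets and broadcasts (clearing the queue) once the backlog reaches a threshold. Costs and the arrival pattern are illustrative, not the thesis's model:

```python
def average_cost(threshold, arrivals, hold_cost=1.0, tx_cost=5.0):
    """Long-run average cost of a threshold policy at a broadcasting relay."""
    q, total = 0, 0.0
    for a in arrivals:
        q += a
        total += hold_cost * q      # pay holding cost on every queued packet
        if q >= threshold:
            total += tx_cost        # one broadcast serves all queued packets
            q = 0
    return total / len(arrivals)

arrivals = [1, 0, 1, 1, 0, 1] * 50  # deterministic toy arrival pattern
costs = {k: round(average_cost(k, arrivals), 2) for k in (1, 2, 3, 4)}
print(costs)
```

A small threshold pays the transmission cost too often; a large one accumulates holding cost. An intermediate threshold balances the two, which is the structure the MDP argument proves optimal.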
75

Topics in Online Markov Decision Processes

Guan, Peng January 2015 (has links)
This dissertation describes sequential decision-making problems in non-stationary environments. Online learning algorithms deal with non-stationary environments, but generally lack a notion of a dynamic state with which to model future impacts of past actions. State-based models are common in stochastic control settings, but well-known frameworks such as Markov decision processes (MDPs) assume a known stationary environment. In recent years there has been growing interest in fusing these two important learning frameworks by considering an MDP setting in which the cost function is allowed to change arbitrarily over time. A number of online MDP algorithms have been designed to work under various assumptions about the dynamics of state transitions and to provide performance guarantees, i.e. bounds on the regret, defined as the performance gap between the total cost incurred by the learner and the total cost of the best stationary policy that could have been chosen in hindsight.

However, most of the work in this area has been algorithmic: given a problem, one would develop an algorithm almost from scratch and prove performance guarantees on a case-by-case basis. Moreover, the presence of the state and the assumption of an arbitrarily varying environment complicate both the theoretical analysis and the development of computationally efficient methods. Another potential issue is that, by removing distributional assumptions about the mechanism generating the cost sequences, existing methods must consider the worst-case scenario, which may render their solutions too conservative when the environment exhibits some degree of predictability.

This dissertation contributes several novel techniques to address the above challenges of the online MDP framework and opens up new research directions for online MDPs. Our proposed general framework for deriving algorithms in the online MDP setting leads to a unifying view of existing methods and provides a general procedure for constructing new ones. Several new algorithms are developed and analyzed using this framework. We develop convex-analytic algorithms that take advantage of possible regularity in observed sequences, yet maintain worst-case performance guarantees. To further study these convex-analytic methods, we step back to the traditional MDP problem and extend the LP approach to MDPs by adding a relative-entropy regularization term. A computationally efficient algorithm for this class of MDPs is constructed under mild assumptions on the state-transition models. Two-player zero-sum stochastic games are also investigated as an important extension of the online MDP setting. In short, this dissertation provides an in-depth analysis of the online MDP problem and answers several important questions in this field. / Dissertation
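The regret notion used above — the learner's total cost minus that of the best fixed policy chosen in hindsight — can be written down directly (the two policies and all costs below are toy numbers, illustrative only):

```python
def regret(learner_costs, policy_cost_fn, policies):
    """Regret after T rounds: learner's total cost minus the total cost of the
    best single policy that could have been played for all T rounds."""
    T = len(learner_costs)
    best_fixed = min(sum(policy_cost_fn(p, t) for t in range(T)) for p in policies)
    return sum(learner_costs) - best_fixed

# Toy instance: two fixed policies with time-varying (arbitrarily chosen) costs.
costs = {"a": [1, 3, 1, 3], "b": [2, 2, 2, 2]}
learner = [2, 3, 2, 2]   # costs actually incurred by some online learner
print(regret(learner, lambda p, t: costs[p][t], ["a", "b"]))  # 9 - min(8, 8) = 1
```

The online MDP setting complicates this picture because each round's cost also depends on a state driven by past actions, not only on the current choice.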
76

Qualitative analysis of probabilistic synchronizing systems / Analyse qualitative des systèmes probabilistes synchronisants

Shirmohammadi, Mahsa 10 December 2014 (has links)
Markov decision processes (MDPs) are finite-state probabilistic systems with both strategic and random choices, and are therefore well established as a model of the interaction between a controller and its randomly responding environment. Mathematically, an MDP can be viewed as a one-and-a-half-player stochastic game played in rounds: the controller chooses an action, and the environment chooses a successor according to a fixed probability distribution.

There are two incomparable views of the behavior of an MDP once the strategic choices are fixed. In the traditional view, an MDP is a generator of sequences of states, called the state-outcome; the winning condition of the player is then expressed as a set of desired sequences of states visited during the game, e.g. a Borel condition such as reachability. The computational complexity of the related decision problems and the memory requirements of winning strategies for state-outcome conditions are well studied.

More recently, MDPs have been viewed as generators of sequences of probability distributions over states, called the distribution-outcome. We introduce synchronizing conditions defined on distribution-outcomes, which intuitively require that the probability mass accumulate in some (group of) state(s), possibly in the limit. A probability distribution is p-synchronizing if the probability mass is at least p in some state, and a sequence of probability distributions is always, eventually, weakly, or strongly p-synchronizing if, respectively, all, some, infinitely many, or all but finitely many distributions in the sequence are p-synchronizing.

For each synchronizing mode, an MDP can be (i) sure winning if there is a strategy that produces a 1-synchronizing sequence; (ii) almost-sure winning if there is a strategy that produces a sequence that is, for all epsilon > 0, (1-epsilon)-synchronizing; (iii) limit-sure winning if, for all epsilon > 0, there is a strategy that produces a (1-epsilon)-synchronizing sequence.

We consider the problem of deciding whether an MDP is winning for each synchronizing and winning mode: we establish matching upper and lower complexity bounds, as well as the memory requirements of optimal winning strategies.

As a further contribution, we study synchronization in probabilistic automata (PAs), a kind of MDP in which controllers are restricted to word-strategies; i.e. they cannot observe the history of the system's execution, only the number of choices made so far. The synchronizing language of a PA is then the set of all synchronizing word-strategies; we establish the computational complexity of the emptiness and universality problems for all synchronizing languages in all winning modes.

We carry over results for synchronizing problems from MDPs and PAs to two-player turn-based games and non-deterministic finite automata. Alongside the main results, we establish new complexity results for alternating finite automata over a one-letter alphabet. In addition, we study different variants of synchronization for timed and weighted automata, two instances of infinite-state systems. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
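The distribution-outcome view and the p-synchronizing test can be sketched on a two-state chain (under a fixed strategy, an MDP induces such a Markov chain; the numbers are illustrative):

```python
def step(d, P):
    """One step of the distribution-outcome: d' = d * P for row-stochastic P."""
    n = len(d)
    return [sum(d[i] * P[i][j] for i in range(n)) for j in range(n)]

def is_p_sync(d, p):
    """p-synchronizing: some single state carries probability mass >= p."""
    return max(d) >= p

# Toy chain that funnels all mass into the absorbing state 1.
P = [[0.5, 0.5],
     [0.0, 1.0]]
d = [0.5, 0.5]
history = [d]
for _ in range(10):
    d = step(d, P)
    history.append(d)

# The sequence is eventually (1-eps)-synchronizing for every eps > 0:
print(is_p_sync(history[0], 0.9), is_p_sync(history[-1], 0.9))  # False True
```

Here no finite prefix is 1-synchronizing, yet the mass in state 1 tends to 1 — the kind of limit behavior that separates the sure, almost-sure, and limit-sure winning modes.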
77

Stochastic Control Of Transmissions Over Multiaccess Fading Channels

Goyal, Munish 12 1900 (has links) (PDF)
No description available.
78

Smart grid-aware radio engineering in 5G mobile networks / Ingénierie radio orientée smart grids dans les réseaux mobiles 5G

Labidi, Wael 21 March 2019 (has links)
The energy demand in mobile networks is increasing due to the emergence of new technologies and of new services with ever higher requirements (data rates, delays, etc.). In this context, the Mobile Network Operator (MNO) has to provide more radio and processing resources in its network, leading to higher financial costs. The MNO has no choice but to implement energy-saving strategies throughout its infrastructure, and especially in the Radio Access Network (RAN). At the same time, the electrical grid is getting smarter, with new functionalities to balance supply and demand by varying electricity prices, allowing aggregators to take part in the supply process and to sign demand-response agreements with their clients. In the context of a reliable smart grid, the MNO, with thousands of evolved NodeBs (eNBs) spread over the whole country, can play a major role in the grid by acting as a prosumer able to sell electricity. In sub-Saharan African countries, however, the grid may be unreliable or even non-existent, and the MNO has no choice but to deploy a Virtual Power Plant (VPP) and rely partially or totally on it.

In this thesis, we study the interactions between the network operator and the grid, whether reliable or not, in both developed and developing countries. We investigate both long-term and short-term optimal energy management, with the aim of minimizing the operator's Total Cost of Ownership (TCO) for energy per base station, i.e. the sum of its Capital Expenditure (CAPEX) and Operational Expenditure (OPEX), while satisfying the growing traffic needs of the users in the cell. The long-term study enables us to make semester-based investment decisions for battery and renewable-energy-source dimensioning, taking into account equipment performance degradation, predictions of user-traffic growth, and electricity-market evolution over a period of years. In the case of a reliable smart grid, the short-term policy helps the operator set, on a daily basis, an optimal battery-management strategy performing electricity arbitrage or trading that takes advantage of hourly fluctuations in electricity prices, minimizing the MNO's daily energy bill while respecting rules on the use of its equipment. In the case of an unreliable or off-grid environment, the operator is powered by hybrid sources coupling storage, diesel generators, solar power, and the grid when the latter is operational. Here, we define a fixed order of priority on the use of these sources that extends the battery's lifetime and maintains its performance.
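The short-term battery-arbitrage idea — buy and store energy in cheap hours, serve the base-station load from the battery in expensive hours — can be sketched with a simple price-threshold rule (toy prices and a fixed 1 kWh/h charge rate; a sketch, not the thesis's optimization):

```python
def daily_arbitrage(prices, load, capacity, threshold):
    """Toy short-term policy: charge the battery when the hourly price is
    below `threshold`, serve the eNB load from it when the price is above."""
    soc, bill = 0.0, 0.0                 # state of charge (kWh), daily bill
    for price, demand in zip(prices, load):
        if price < threshold and soc < capacity:
            buy = demand + min(1.0, capacity - soc)   # serve load and charge 1 kWh
            soc += buy - demand
        else:
            from_batt = min(soc, demand)              # discharge first
            soc -= from_batt
            buy = demand - from_batt
        bill += price * buy
    return round(bill, 2)

prices = [0.10, 0.10, 0.30, 0.30]   # EUR/kWh, cheap hours first (toy values)
load = [1.0, 1.0, 1.0, 1.0]         # kWh consumed by the eNB each hour
print(daily_arbitrage(prices, load, capacity=2.0, threshold=0.20))  # 0.4
```

Buying every hour at spot would cost 0.80 here, so shifting purchases to cheap hours halves the toy bill; the thesis's policy additionally respects equipment-usage rules and can sell energy back.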
79

A Study on Optimization Measurement Policies for Quality Control Improvements in Gene Therapy Manufacturing

January 2020 (has links)
abstract: With the increased demand for genetically modified T-cells in treating hematological malignancies, the need for an optimized measurement policy within current good manufacturing practices, for better quality control, has grown greatly. Manufacturing a gene therapy involves several steps. For autologous gene therapy these steps are, in chronological order: harvesting T-cells from the patient; activating the cells (thawing the cryogenically frozen cells after transport to the manufacturing center); viral-vector transduction; Chimeric Antigen Receptor (CAR) attachment during T-cell expansion; and infusion into the patient. The need for improved measurement heuristics within the transduction and expansion portions of the manufacturing process has reached an all-time high because of the costly nature of manufacturing the product, the long cycle time (approximately 14-28 days from activation to infusion), and the risk of external contamination during manufacturing, which harms patients post-infusion (causing illness and death). The main objective of this work is to investigate and improve measurement policies for quality control in the transduction/expansion bio-manufacturing processes. More specifically, this study addresses the issue of measuring yield within the transduction/expansion phases of gene therapy. To do so, the process is modeled as a Markov Decision Process in which the decisions are chosen optimally to form an overall optimal measurement policy for a set of predefined parameters. / Dissertation/Thesis / Masters Thesis Industrial Engineering 2020
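A measurement-policy MDP of this kind can be illustrated with generic value iteration on a tiny two-state stand-in model (the states, costs, and transition probabilities below are invented for illustration, not taken from the thesis):

```python
def value_iteration(states, actions, T, R, gamma=0.9, iters=200):
    """Generic finite-MDP value iteration returning values and a greedy policy.
    T[s][a] maps successor state -> probability; R[s][a] is the immediate reward."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a].items())
                    for a in actions)
             for s in states}
    policy = {s: max(actions, key=lambda a: R[s][a] + gamma *
                     sum(p * V[s2] for s2, p in T[s][a].items()))
              for s in states}
    return V, policy

states, actions = ["ok", "drifted"], ["measure", "skip"]
# Skipping a measurement while the batch has drifted risks losing the lot.
T = {"ok":      {"measure": {"ok": 0.9, "drifted": 0.1},
                 "skip":    {"ok": 0.8, "drifted": 0.2}},
     "drifted": {"measure": {"ok": 1.0},
                 "skip":    {"drifted": 1.0}}}
R = {"ok":      {"measure": -1.0, "skip": 0.0},    # measuring has a cost
     "drifted": {"measure": -5.0, "skip": -20.0}}  # unmeasured drift is very costly
V, policy = value_iteration(states, actions, T, R)
print(policy)  # {'ok': 'skip', 'drifted': 'measure'}
```

Even in this toy model the optimal policy is state-dependent — measure only when drift is suspected — which is the kind of structure an optimized measurement policy exploits to cut quality-control cost.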
80

Spoken Dialogue System for Information Navigation based on Statistical Learning of Semantic and Dialogue Structure / 意味・対話構造の統計的学習に基づく情報案内のための音声対話システム

Yoshino, Koichiro 24 September 2014 (has links)
Kyoto University / 0048 / New-system course doctorate / Doctor of Informatics / Kō No. 18614 / Jōhaku No. 538 / 新制||情||95(附属図書館) / 31514 / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Tatsuya Kawahara, Professor Sadao Kurohashi, Professor Hisashi Kashima / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
