81

A Study on Optimization Measurement Policies for Quality Control Improvements in Gene Therapy Manufacturing

January 2020
abstract: With the increased demand for genetically modified T-cells in treating hematological malignancies, the need for an optimized measurement policy within current good manufacturing practices for better quality control has grown greatly. Manufacturing autologous gene therapy involves several steps, in chronological order: harvesting T-cells from the patient, activating the cells (thawing the cryogenically frozen cells after transport to the manufacturing center), viral vector transduction, Chimeric Antigen Receptor (CAR) attachment during T-cell expansion, and finally infusion into the patient. The need for improved measurement heuristics within the transduction and expansion portions of the manufacturing process has reached an all-time high because of the costly nature of manufacturing the product, the long cycle time (approximately 14-28 days from activation to infusion), and the risk of external contamination during manufacturing, which negatively impacts patients post infusion (causing illness or death). The main objective of this work is to investigate and improve measurement policies, on the basis of quality control, in the transduction/expansion bio-manufacturing processes. More specifically, this study addresses the issue of measuring yield within the transduction/expansion phases of gene therapy. To do so, the process is modeled as a Markov Decision Process in which the decisions are chosen optimally to form an overall optimal measurement policy for a set of predefined parameters. / Dissertation/Thesis / Masters Thesis Industrial Engineering 2020
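The optimal-policy computation this abstract describes can be sketched with plain value iteration on a toy MDP. Everything below (coarse "yield level" states, a measure/skip action pair, random transitions and rewards) is a hypothetical placeholder, not the thesis's calibrated model:

```python
import numpy as np

# Value iteration on a toy finite MDP. The states (coarse yield levels),
# actions (e.g. "measure" vs "skip"), transitions P and rewards R are
# hypothetical placeholders, not the thesis's calibrated model.
n_states, n_actions = 4, 2
gamma = 0.95                          # discount factor
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
R = rng.uniform(-1.0, 1.0, size=(n_states, n_actions))            # R[s, a]

V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * np.einsum("asn,n->sa", P, V)   # action values Q[s, a]
    V_new = Q.max(axis=1)                          # Bellman optimality update
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)   # optimal (measurement) action in each state
```

The fixed point of this update is the optimal value function; the greedy `policy` it induces is the "overall optimal measurement policy" for the chosen parameters.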
82

Spoken Dialogue System for Information Navigation based on Statistical Learning of Semantic and Dialogue Structure / 意味・対話構造の統計的学習に基づく情報案内のための音声対話システム

Yoshino, Koichiro 24 September 2014
Kyoto University / 0048 / New-system doctoral course / Doctor of Informatics / Kō No. 18614 / Jōhaku No. 538 / 新制||情||95(附属図書館) / 31514 / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Examination committee) Professor Tatsuya Kawahara, Professor Sadao Kurohashi, Professor Hisashi Kashima / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
83

Stochastic Dynamic Optimization and Games in Operations Management

Wei, Wei 12 March 2013
No description available.
84

Using Markov Decision Processes and Reinforcement Learning to Guide Penetration Testers in the Search for Web Vulnerabilities / Användandet av Markov Beslutsprocesser och Förstärkt Inlärning för att Guida Penetrationstestare i Sökandet efter Sårbarheter i Webbapplikationer

Pettersson, Anders, Fjordefalk, Ossian January 2019
Bug bounties are an increasingly popular way of performing penetration tests of web applications. User statistics from bug bounty platforms show that many hackers struggle to find bugs. This report explores a way of using Markov decision processes and reinforcement learning to help hackers find vulnerabilities in web applications, by building a tool that suggests attack surfaces to examine and vulnerability reports to read to gain the relevant knowledge. The attack surfaces, vulnerabilities and reports are all derived from a taxonomy of web vulnerabilities created in a collaborating project. A Markov decision process (MDP) was defined; it comprises the environment, different states of knowledge, and the actions that can take a user from one state of knowledge to another. To suggest the best possible next action, the MDP uses a policy that assigns each state a value, called its Q-value, indicating how close that state is to one where a vulnerability has been found: a state has a high Q-value if its knowledge gives the user a high probability of finding a vulnerability, and vice versa. This policy was created using a reinforcement learning algorithm called Q-learning. The tool was implemented as a web application using Java Spring Boot and ReactJS. The resulting tool is best suited for new hackers in the learning process. The current version is trained on the indexed reports of the vulnerability taxonomy, but future versions should be trained on user behaviour collected from the tool.
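The Q-learning scheme the abstract describes can be illustrated with a tabular sketch (in Python for brevity; the actual tool is a Java Spring Boot application). The environment below — a chain of knowledge states ending in a "vulnerability found" terminal state, with made-up dynamics and hyperparameters — is a hypothetical stand-in for the report's taxonomy-derived MDP:

```python
import random

# Tabular Q-learning on a toy "knowledge chain": states are levels of
# knowledge; the last state models having found a vulnerability.
# Dynamics, reward and hyperparameters are illustrative assumptions.
n_states, n_actions = 5, 2
terminal = n_states - 1

def step(s, a):
    """Hypothetical dynamics: action 1 reliably advances knowledge,
    action 0 only sometimes does."""
    s2 = min(s + 1, terminal) if (a == 1 or random.random() < 0.3) else s
    reward = 1.0 if s2 == terminal else 0.0
    return s2, reward, s2 == terminal

random.seed(0)
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration
for _ in range(2000):                  # training episodes
    s = 0
    done = False
    while not done:
        if random.random() < eps:      # epsilon-greedy action selection
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])   # Q-learning update
        s = s2
# Q[s][a] now scores how close (state, action) pairs are to finding a bug,
# which is exactly what the tool's suggestions are ranked by.
```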
85

Resource allocation and load-shedding policies based on Markov decision processes for renewable energy generation and storage

Jimenez, Edwards 01 January 2015
In modern power systems, renewable energy has become an increasingly popular form of generation as a result of the rules and regulations being implemented toward achieving clean energy worldwide. However, clean energy can have drawbacks in several forms; wind energy, for example, can introduce intermittency. In this thesis, we discuss a method to deal with this intermittency. In particular, by shedding a specific amount of load we can avoid a total breakdown of the power plant. The load-shedding method discussed in this thesis utilizes a Markov decision process with backward policy iteration. This is a probabilistic method that chooses the load-shedding path minimizing the total expected cost while ensuring no power failure. We compare our results with two control policies, a load-balancing policy and a less-load-shedding policy. It is shown that the proposed MDP policy outperforms the other control policies and achieves the minimum total expected cost.
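The backward computation this abstract refers to can be sketched as standard finite-horizon backward induction on a cost-minimizing MDP. The stage count, states, transitions, and costs below are made up for illustration, not the thesis's power-system data:

```python
import numpy as np

# Finite-horizon backward induction: at each stage, pick the load-shedding
# action minimizing immediate cost plus expected future cost. Stage costs C
# and transitions P are hypothetical, not the thesis's data.
T, n_states, n_actions = 24, 3, 2    # e.g. hourly stages, coarse wind regimes
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
C = rng.uniform(0.0, 5.0, size=(n_states, n_actions))             # C[s, a]

V = np.zeros(n_states)                       # terminal cost-to-go
policy = np.zeros((T, n_states), dtype=int)
for t in reversed(range(T)):                 # sweep backward in time
    Q = C + np.einsum("asn,n->sa", P, V)     # expected cost of each action
    policy[t] = Q.argmin(axis=1)             # minimize cost, not maximize reward
    V = Q.min(axis=1)
# V[s] is the minimal total expected cost from state s at stage 0;
# policy[t, s] is the cost-optimal shedding decision at stage t.
```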
86

Hidden Markov models: Identification, control and inverse filtering

Mattila, Robert January 2018
The hidden Markov model (HMM) is one of the workhorse tools in, for example, statistical signal processing and machine learning. It has found applications in a vast number of fields, ranging all the way from bioscience to speech recognition to modeling of user interactions in social networks. In an HMM, a latent state transitions according to Markovian dynamics. The state is only observed indirectly via a noisy sensor – that is, it is hidden. This type of model is at the center of this thesis, which in turn touches upon three main themes. Firstly, we consider how the parameters of an HMM can be estimated from data. In particular, we explore how recently proposed methods of moments can be combined with more standard maximum likelihood (ML) estimation procedures. The motivation for this is that, albeit the ML estimate possesses many attractive statistical properties, many ML schemes have to rely on local-search procedures in practice, which are only guaranteed to converge to local stationary points in the likelihood surface – potentially inhibiting them from reaching the ML estimate. By combining the two types of algorithms, the goal is to obtain the benefits of both approaches: the consistency and low computational complexity of the former, and the high statistical efficiency of the latter. The filtering problem – estimating the hidden state of the system from observations – is of fundamental importance in many applications. As a second theme, we consider inverse filtering problems for HMMs. In these problems, the setup is reversed; what information about an HMM-filtering system is exposed by its state estimates? We show that it is possible to reconstruct the specifications of the sensor, as well as the observations that were made, from the filtering system’s posterior distributions of the latent state. This can be seen as a way of reverse engineering such a system, or as using an alternative data source to build a model. 
Thirdly, we consider Markov decision processes (MDPs) – systems with Markovian dynamics where the parameters can be influenced by the choice of a control input. In particular, we show how it is possible to incorporate prior information regarding monotonic structure of the optimal decision policy so as to accelerate its computation. Subsequently, we consider a real-world application by investigating how these models can be used to model the treatment of abdominal aortic aneurysms (AAAs). Our findings are that the structural properties of the optimal treatment policy are different from those used in clinical practice – in particular, that younger patients could benefit from earlier surgery. This indicates an opportunity for improved care of patients with AAAs.
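The filtering problem the abstract centers on has a compact form for a discrete HMM: alternate a measurement update with a Markov prediction step. The transition matrix, sensor model, and observation sequence below are illustrative, not from the thesis:

```python
import numpy as np

# Forward (filtering) recursion for a two-state discrete HMM.
# A, B, the prior and the observations are illustrative assumptions.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # A[i, j] = P(x_{k+1} = j | x_k = i)
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])          # B[i, y] = P(y_k = y | x_k = i)
prior = np.array([0.5, 0.5])
obs = [0, 0, 1, 1, 1]               # a short noisy observation sequence

posteriors = []
pi = prior                          # predicted distribution of the state
for y in obs:
    unnorm = pi * B[:, y]           # measurement update with sensor model B
    pi = unnorm / unnorm.sum()
    posteriors.append(pi)           # filtered posterior P(x_k | y_1..y_k)
    pi = pi @ A                     # time update: predict the next state
# The inverse filtering problems in the thesis reverse this computation:
# given the sequence of posteriors, what B and obs could have produced it?
```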
87

Optimal Call Admission Control Policies in Wireless Cellular Networks Using Semi-Markov Decision Processes

Ni, Wenlong January 2008
No description available.
88

Information Freshness and Delay Optimization in Unreliable Wireless Systems

Yao, Guidan 02 September 2022
No description available.
89

Deep Reinforcement Learning for Open Multiagent System

Zhu, Tianxing 20 September 2022
No description available.
90

Integrated and Coordinated Relief Logistics Planning Under Uncertainty for Relief Logistics Operations

Kamyabniya, Afshin 22 September 2022
In this thesis, we explore three critical emergency logistics problems faced by healthcare and humanitarian relief service providers in short-term post-disaster management. In the first manuscript, we investigate various integration mechanisms (fully integrated horizontal-vertical, horizontal, and vertical resource-sharing mechanisms) following a natural disaster for a multi-patient logistics network of multiple types of whole-blood-derived platelets. The goal is to reduce the shortage and wastage of platelets across blood groups in the response phase of relief logistics operations. To solve the logistics model at large scale, we develop a hybrid exact solution approach combining augmented epsilon-constraint and Lagrangian relaxation algorithms, and demonstrate the model's applicability in a case study of an earthquake. Because the number of injuries needing the various platelet types is uncertain, we apply a robust optimization version of the proposed model that captures the expected performance of the system. The results show that the platelet logistics network under coordinated and integrated mechanisms controls the level of shortage and wastage better than a non-integrated network. In the second manuscript, we propose a two-stage casualty evacuation model that involves routing patients with different injury levels during wildfires. The first stage deals with field hospital selection, and the second stage determines the number of patients who can be transferred to the selected hospitals or shelters via different routes of the evacuation network. The goal of this model is to reduce evacuation response time, which ultimately increases the number of people evacuated from assembly points under limited time windows. To solve the model for large-scale problems, we develop a two-step meta-heuristic algorithm.
To handle multiple sources of uncertainty, we apply a flexible robust approach that considers the worst-case and expected performance of the system simultaneously under any realization of the uncertain parameters. The results show that the fully coordinated evacuation model, in which vehicles can freely pick up and drop off patients at different locations and may start their next operation without being forced to return to the departure point (the evacuation assembly points), outperforms the non-coordinated and non-integrated evacuation models in terms of the number of evacuated patients. In the third manuscript, we propose an integrated transportation and hospital capacity model to optimize the assignment of relevant medical resources to patients with multiple injury levels during a mass casualty incident (MCI). We develop a finite-horizon Markov decision process (MDP) to allocate resources and hospital capacities to injured people dynamically over a limited time horizon. We solve this model using the linear programming approach to approximate dynamic programming (ADP) and a two-phase heuristic based on a column generation algorithm. The results show that better policies can be derived for allocating limited resources (i.e., vehicles) and hospital capacities to the injured compared with the benchmark. Each paper makes a worthwhile contribution to the humanitarian relief operations literature and can help relief and healthcare providers optimize resource and service logistics by applying the proposed integration and coordination mechanisms.
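The linear programming approach to solving an MDP can be sketched on a small discounted toy problem (the thesis's finite-horizon ADP formulation with column generation is more elaborate). The transitions and rewards below are hypothetical, and this sketch assumes SciPy's `linprog` is available:

```python
import numpy as np
from scipy.optimize import linprog

# LP formulation of a small discounted MDP: minimize sum_s V(s) subject to
#   V(s) >= R[s, a] + gamma * P[a, s, :] @ V   for every state s and action a.
# Transitions P and rewards R are hypothetical placeholders.
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # R[s, a]

A_ub, b_ub = [], []
for s in range(n_states):
    for a in range(n_actions):
        row = gamma * P[a, s].copy()
        row[s] -= 1.0              # (gamma*P[a,s] - e_s) @ V <= -R[s, a]
        A_ub.append(row)
        b_ub.append(-R[s, a])
res = linprog(np.ones(n_states), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_states)
V = res.x   # optimal value function; tight constraints identify optimal actions
```

ADP variants of this LP replace `V` with a low-dimensional parameterization so the approach scales, which is what makes the column generation machinery necessary in the thesis's setting.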
