611 |
Naming and synchronization in a decentralized computer system. Reed, David Patrick, 1952- January 1979 (has links)
Thesis. 1979. Ph.D.--Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Vita. / Bibliography: leaves 212-216. / Ph.D.
|
612 |
Estimation of distribution algorithms with dependency learning. / CUHK electronic theses & dissertations collection. January 2009 (has links)
Li, Gang. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 121-136). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
|
613 |
Application of a computer model in designing Kansas feedlot waste control systems. Peterson, Michael J January 2011 (has links)
Typescript. / Digitized by Kansas Correctional Industries
|
614 |
Personalized Policy Learning with Longitudinal mHealth Data. Hu, Xinyu January 2019 (has links)
Mobile devices, such as smartphones and wearable devices, have become a popular platform for delivering recommendations and interacting with users. To learn the decision rule for assigning recommendations, i.e. the policy, neither a single homogeneous policy for all users nor a completely heterogeneous policy for each user is appropriate. Many attempts have been made to learn a policy for making recommendations from observational mobile health (mHealth) data. The majority of them focus on a homogeneous policy, that is, a one-size-fits-all policy for all users. This is a fair starting point for an mHealth study, but it ignores underlying user heterogeneity: users with similar behavior patterns may still differ in unobservable ways. To address this problem, we develop a personalized learning framework that models both population-level and personalized effects simultaneously.
In the first part of this dissertation, we address the personalized policy learning problem using longitudinal mHealth application usage data. A personalized policy represents a paradigm shift from developing a single policy that prescribes personalized decisions merely by tailoring. Specifically, we aim to develop the best policy, one per user, based on estimating random effects under a generalized linear mixed model. With many random effects, we consider a new estimation method and a penalized objective to circumvent the high-dimensional integrals required for marginal likelihood approximation. We establish consistency and optimality of our method with endogenous application usage. We apply our method to develop personalized prompt schedules for 294 application users, with the goal of maximizing the prompt response rate given past application usage and other contextual factors. We found that the best push schedule, given the same covariates, varied among users, calling for personalized policies. Using the estimated personalized policies would have achieved a mean prompt response rate of 23% in these users at 16 weeks or later: a remarkable improvement on the observed rate (11%), while the literature suggests 3%-15% user engagement at 3 months after download. The proposed method compares favorably to existing estimation methods, including the R function glmer, in a simulation study.
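A rough illustration of the penalized-objective idea described above (a sketch, not the dissertation's actual estimator): fit a logistic model with one random intercept per user by maximizing a ridge-penalized joint likelihood, sidestepping the marginal-likelihood integral. All data and names below are synthetic stand-ins.

```python
# Sketch: logistic model with population effects beta and one random
# intercept per user, fitted via a ridge-penalized joint likelihood
# instead of the intractable marginal likelihood. Data are synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def penalized_nll(params, X, y, user, lam):
    p = X.shape[1]
    beta, b = params[:p], params[p:]          # fixed effects, random intercepts
    mu = expit(X @ beta + b[user])            # personalized linear predictor
    nll = -np.sum(y * np.log(mu + 1e-12) + (1 - y) * np.log(1 - mu + 1e-12))
    return nll + lam * np.sum(b ** 2)         # penalty replaces the Gaussian integral

def fit(X, y, user, n_users, lam=1.0):
    x0 = np.zeros(X.shape[1] + n_users)
    return minimize(penalized_nll, x0, args=(X, y, user, lam),
                    method="L-BFGS-B").x

rng = np.random.default_rng(0)
n_users, n_obs, p = 50, 40, 3
user = np.repeat(np.arange(n_users), n_obs)
X = rng.normal(size=(n_users * n_obs, p))
eta = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=0.8, size=n_users)[user]
y = (rng.random(n_users * n_obs) < expit(eta)).astype(float)
params = fit(X, y, user, n_users)   # each user's policy then prompts at the
                                    # hour maximizing its predicted response rate
```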
In the second part of this dissertation, we aim to solve a practical problem in the mHealth area. A low response rate has been a major issue blocking researchers from collecting high-quality mHealth data, so developing a prompting system is important for maintaining user engagement and increasing the response rate. We aim to learn personalized prompting times for users in order to attain a high response rate. An extension of the personalized learning algorithm is applied to the Intellicare data that incorporates penalties on the population effect parameters and the personalized effect parameters into learning the personalized decision rule for sending prompts. The number of personalized policy parameters increases with the sample size, and since the Intellicare data contain a large number of users, estimating such high-dimensional parameters is challenging. To solve this computational issue, we employ a bagging method that first bootstraps subsamples and then ensembles the parameters learned from each subsample. The analysis of the Intellicare data shows that sending prompts at a personalized hour achieves a higher response rate than a one-size-fits-all prompting hour.
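A minimal sketch of the subsample-and-ensemble step just described: bootstrap user subsamples, fit each, then average the estimates. The `fit_fn` and the demo statistic are hypothetical stand-ins, not the dissertation's code.

```python
# Sketch: bag parameter estimates over bootstrapped user subsamples,
# then average, to avoid fitting all users' effects at once.
import numpy as np

rng = np.random.default_rng(1)

def bagged_fit(users, fit_fn, n_bags=20, frac=0.2):
    estimates = []
    for _ in range(n_bags):
        sub = rng.choice(users, size=max(1, int(frac * len(users))), replace=False)
        estimates.append(fit_fn(sub))          # fit on a small user subsample
    return np.mean(estimates, axis=0)          # ensemble of the bagged estimates

stats = rng.normal(size=294)                   # stand-in per-user statistic
demo_fit = lambda sub: np.array([stats[sub].mean()])
theta = bagged_fit(np.arange(294), demo_fit)
```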
|
615 |
Stochastic dynamics and wavelets techniques for system response analysis and diagnostics: Diverse applications in structural and biomedical engineering. dos Santos, Ketson Roberto Maximiano January 2019 (has links)
In the first part of the dissertation, a novel stochastic averaging technique based on a Hilbert transform definition of the oscillator response displacement amplitude is developed. In comparison to standard stochastic averaging, the requirement of “a priori” determination of an equivalent natural frequency is bypassed, yielding flexibility in the ensuing analysis and potentially higher accuracy. Further, the herein proposed Hilbert transform based stochastic averaging is adapted for determining the time-dependent survival probability and first-passage time probability density function of stochastically excited nonlinear oscillators, even endowed with fractional derivative terms. To this aim, a Galerkin scheme is utilized to solve approximately the backward Kolmogorov partial differential equation governing the survival probability of the oscillator response. Next, the potential of the stochastic averaging technique to be used in conjunction with performance-based engineering design applications is demonstrated by proposing a stochastic version of the widely used incremental dynamic analysis (IDA). Specifically, modeling the excitation as a non-stationary stochastic process possessing an evolutionary power spectrum (EPS), an approximate closed-form expression is derived for the parameterized oscillator response amplitude probability density function (PDF). In this regard, IDA surfaces are determined providing the conditional PDF of the engineering demand parameter (EDP) for a given intensity measure (IM) value. In contrast to the computationally expensive Monte Carlo simulation, the methodology developed herein determines the IDA surfaces at minimal computational cost.
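The amplitude definition underlying this technique is easy to state concretely. A minimal sketch on a synthetic decaying oscillation: the instantaneous amplitude is the modulus of the analytic signal, so no equivalent natural frequency has to be fixed in advance.

```python
# Sketch: Hilbert-transform response amplitude A(t) = |x(t) + i*H[x](t)|,
# the quantity the averaging technique above is built on. Signal is synthetic.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 20.0, 4000)
x = np.exp(-0.05 * t) * np.cos(2.0 * np.pi * 1.0 * t)  # decaying oscillation

analytic = hilbert(x)            # x(t) + i * H[x](t)
amplitude = np.abs(analytic)     # instantaneous displacement amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.gradient(phase, t) / (2.0 * np.pi)  # no a priori natural frequency needed
```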
In the second part of the dissertation, a novel multiple-input/single-output (MISO) system identification technique is developed for parameter identification of nonlinear and time-variant oscillators with fractional derivative terms subject to incomplete non-stationary data. The technique utilizes a representation of the nonlinear restoring forces as a set of parallel linear sub-systems. Next, a recently developed L1-norm minimization procedure based on compressive sensing theory is applied for determining the wavelet coefficients of the available incomplete non-stationary input-output (excitation-response) data. Several numerical examples are considered for assessing the reliability of the technique, even in the presence of incomplete and corrupted data. These include a 2-DOF time-variant Duffing oscillator endowed with fractional derivative terms, as well as a 2-DOF system subject to flow-induced forces where the non-stationary sea state possesses a recently proposed evolutionary version of the JONSWAP spectrum.
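A minimal sketch of the L1-minimization step under stated assumptions: a DCT basis stands in for the harmonic-wavelet dictionary, the signal and sampling mask are synthetic, and scikit-learn's Lasso plays the role of the compressive-sensing solver.

```python
# Sketch: recover basis coefficients of a signal from incomplete samples by
# L1-minimization (compressive sensing). A DCT basis stands in for the
# wavelet dictionary used in the thesis; data are synthetic.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n = 512
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

keep = rng.choice(n, size=n // 4, replace=False)   # only 25% of samples observed
Psi = idct(np.eye(n), axis=0, norm="ortho")        # basis matrix: columns are atoms
A = Psi[keep, :]                                   # sensing matrix (observed rows)
y = signal[keep]

lasso = Lasso(alpha=1e-3, max_iter=50_000)
lasso.fit(A, y)
reconstruction = Psi @ lasso.coef_                 # signal rebuilt from sparse coeffs
```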
In the third part of this dissertation, a joint time-frequency analysis technique based on generalized harmonic wavelets (GHWs) is developed for dynamic cerebral autoregulation (DCA) performance quantification. DCA is the continuous counter-regulation of the cerebral blood flow by the active response of cerebral blood vessels to the spontaneous or induced blood pressure fluctuations. Specifically, various metrics of the phase shift and magnitude of appropriately defined GHW-based transfer functions are determined based on data points over the joint time-frequency domain. The potential of these metrics to be used as a diagnostics tool for indicating healthy versus impaired DCA function is assessed by considering both healthy individuals and patients with unilateral carotid artery stenosis. Next, another application in biomedical engineering is pursued related to the Pulse Wave Imaging (PWI) technique. This relies on ultrasonic signals for capturing the propagation of pressure pulses along the carotid artery, and eventually for prognosis of focal vascular diseases (e.g., atherosclerosis and abdominal aortic aneurysm). However, to obtain a high spatio-temporal resolution the data are acquired at a high rate, in the order of kilohertz, yielding large datasets. To address this challenge, an efficient data compression technique is developed based on the multiresolution wavelet decomposition scheme, which exploits the high correlation of adjacent RF-frames generated by the PWI technique. Further, a sparse matrix decomposition is proposed as an efficient way to identify the boundaries of the arterial wall in the PWI technique.
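A rough sketch of the compression idea, assuming synthetic frames: adjacent RF-frames are highly correlated, so frame-to-frame residuals admit a sparse thresholded wavelet representation (here via PyWavelets, with illustrative wavelet and threshold choices).

```python
# Sketch: compress a sequence of frames by encoding frame-to-frame
# differences with a thresholded multiresolution wavelet decomposition,
# exploiting the high correlation of adjacent frames. Frames are synthetic.
import numpy as np
import pywt

def compress(residual, wavelet="db4", level=4, thresh=0.02):
    coeffs = pywt.wavedec(residual, wavelet, level=level)
    scale = max(np.max(np.abs(c)) for c in coeffs)
    return [np.where(np.abs(c) >= thresh * scale, c, 0.0) for c in coeffs]

def decompress(coeffs, wavelet="db4", n=1024):
    return pywt.waverec(coeffs, wavelet)[:n]

rng = np.random.default_rng(2)
base = np.cumsum(rng.normal(size=1024))              # shared slow structure
frames = base + 0.05 * rng.normal(size=(8, 1024))    # correlated adjacent frames

prev = frames[0]
for frame in frames[1:]:
    sparse = compress(frame - prev)                  # small residual -> few coeffs
    prev = prev + decompress(sparse)                 # reconstruct for next step
```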
|
616 |
Optimization for Probabilistic Machine Learning. Fazelnia, Ghazal January 2019 (has links)
We have access to a greater variety of datasets than at any time in history. Every day, more data is collected from various natural sources and digital platforms. Great advances in machine learning research in the past few decades have relied strongly on the availability of these datasets. However, analyzing them imposes significant challenges, mainly due to two factors: first, the datasets have complex structures with hidden interdependencies; second, most of the valuable datasets are high dimensional and large in scale. The main goal of a machine learning framework is to design a model that is a valid representative of the observations and to develop a learning algorithm that makes inferences about unobserved or latent data based on the observations. Discovering hidden patterns and inferring latent characteristics in such datasets is one of the greatest challenges in machine learning research. In this dissertation, I will investigate some of the challenges in modeling and algorithm design, and present my research results on how to overcome these obstacles.
Analyzing data generally involves two main stages. The first stage is designing a model that is flexible enough to capture complex variation and latent structures in the data and robust enough to generalize well to unseen data; designing an expressive and interpretable model is one of the crucial objectives here. The second stage involves training the learning algorithm on the observed data and measuring the accuracy of the model and the learning algorithm. This stage usually involves an optimization problem whose objective is to tune the model to the training data and learn the model parameters. Finding a global optimum, or a sufficiently good local optimum, is one of the main challenges in this step.
Probabilistic models are among the best-known models for capturing the data-generating process and quantifying uncertainties in data using random variables and probability distributions. They are powerful models that have been shown to be adaptive and robust and to scale well to large datasets. However, most probabilistic models have a complex structure, and training them can become challenging, commonly due to the presence of intractable integrals in the calculation. To remedy this, they require approximate inference strategies that often result in non-convex optimization problems. The optimization step ensures that the model is the best representative of the data or the data-generating process, but non-convexity takes away any general guarantee of finding a globally optimal solution. It will be shown later in this dissertation that inference for a significant number of probabilistic models requires solving a non-convex optimization problem.
One of the well-known methods for approximate inference in probabilistic modeling is variational inference. In the Bayesian setting, the target is to learn the true posterior distribution of the model parameters given the observations and prior distributions. The main challenge is marginalizing over all variables in the model except the variable of interest. This high-dimensional integral is generally computationally hard, and for many models there is no known polynomial-time algorithm for calculating it exactly. Variational inference finds an approximate posterior distribution for Bayesian models where finding the true posterior is analytically or numerically impossible. It assumes a family of distributions for the estimate and finds the member of that family closest to the true posterior under a distance measure. For many models, though, this technique requires solving a non-convex optimization problem with no general guarantee of reaching a globally optimal solution. This dissertation presents a convex relaxation technique for dealing with the hardness of the optimization involved in inference.
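A minimal sketch of variational inference on the textbook Gaussian mean/precision model, where the coordinate-ascent updates happen to be closed form; this illustrates the "assume a family, find its closest member" recipe, not the non-convex problems treated in this dissertation.

```python
# Sketch: coordinate-ascent variational inference for the mean/precision of
# Gaussian data, approximating p(mu, tau | x) by a factorized q(mu) q(tau).
# Priors: mu ~ N(mu0, (kappa0*tau)^-1), tau ~ Gamma(a0, b0). Data synthetic.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=0.5, size=200)
N, xbar = len(x), x.mean()

mu0, kappa0, a0, b0 = 0.0, 1.0, 1.0, 1.0   # prior hyperparameters
m, lam, a, b = 0.0, 1.0, 1.0, 1.0          # init q(mu)=N(m,1/lam), q(tau)=Gamma(a,b)

for _ in range(50):                         # each update is closed form,
    E_tau = a / b                           # so no integral is ever computed
    m = (kappa0 * mu0 + N * xbar) / (kappa0 + N)
    lam = (kappa0 + N) * E_tau
    E_mu, E_mu2 = m, m**2 + 1.0 / lam
    a = a0 + (N + 1) / 2.0
    b = b0 + 0.5 * kappa0 * (E_mu2 - 2 * mu0 * E_mu + mu0**2) \
           + 0.5 * np.sum(x**2 - 2 * x * E_mu + E_mu2)

print(f"q(mu) ~= N({m:.3f}, {1/lam:.5f}), E[tau] = {a/b:.3f}")
```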
The proposed convex relaxation technique is based on semidefinite optimization, which is applicable to polynomial optimization problems in general. I will present the theoretical foundations and in-depth details of this relaxation in this work. Linear dynamical systems represent the functionality of many real-world physical systems: they can describe the dynamics of a linear time-varying observation controlled by a controller unit with quadratic cost objectives. Designing distributed and decentralized controllers is the goal of many of these systems, which, computationally, results in a non-convex optimization problem. In this dissertation, I will further investigate the issues arising in this area and develop a convex relaxation framework to deal with the optimization challenges.
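The lifting pattern behind semidefinite relaxation can be shown on a toy quadratic problem with +/-1 variables (not the controller-design problems studied here); cvxpy and the random matrix `W` are illustrative choices.

```python
# Sketch: semidefinite relaxation of a non-convex quadratic problem.
# Replacing the rank-one matrix X = x x^T by any PSD X with unit diagonal
# turns the problem into a convex SDP whose value bounds the true optimum.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                         # symmetric quadratic form

X = cp.Variable((n, n), PSD=True)         # lifted variable, X ~ x x^T
prob = cp.Problem(cp.Maximize(cp.trace(W @ X)),
                  [cp.diag(X) == 1])      # encodes x_i^2 = 1
prob.solve()
print("SDP upper bound:", prob.value)     # bounds the non-convex optimum
```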
Setting the correct number of model parameters is an important aspect of a good probabilistic model. With too few parameters, the model may fail to capture all the essential relations and components in the observations, while too many parameters may cause significant complications in learning or overfitting to the observations. Non-parametric models are suitable techniques for dealing with this issue: they allow the model to learn the appropriate number of parameters to describe the data and make predictions. In this dissertation, I will present my work on designing Bayesian non-parametric models as powerful tools for learning representations of data. Moreover, I will describe the algorithm we derived to efficiently train the model on the observations and learn the number of model parameters.
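A small illustration of the non-parametric idea, assuming a Dirichlet-process mixture as a stand-in for the models developed here: the fit prunes unneeded components, effectively learning the number of parameters from the data.

```python
# Sketch: a Dirichlet-process mixture "learns the number of parameters"
# by shrinking the weights of unneeded components. Uses scikit-learn;
# the data are synthetic blobs.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(5)
data = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2))
                  for c in ((0, 0), (3, 3), (0, 4))])   # 3 true clusters

dpgmm = BayesianGaussianMixture(
    n_components=10,                                    # generous upper bound
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(data)

print("effective components:", np.sum(dpgmm.weights_ > 0.01))  # ~3 survive
```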
Later in this dissertation, I will present my work on designing probabilistic models in combination with deep learning methods for representing sequential data. Sequential datasets comprise a significant portion of the resources in machine learning research. Designing models to capture dependencies in sequential datasets is of great interest and has a wide variety of applications in engineering, medicine and statistics. Recent advances in deep learning research have shown exceptional promise in this area; however, such models lack interpretability in their general form. To remedy this, I will present my work on combining probabilistic models with neural network models, which results in better performance and more expressive results.
|
617 |
MAE: a mobile agent environment for resource limited devices. Mihailescu, Patrik, 1977- January 2003 (has links)
Abstract not available
|
618 |
Modelling and analysis of the resource reservation protocol using coloured Petri nets. Villapol, Maria January 2003 (has links)
The Resource Reservation Protocol (RSVP) is one of the proposals of the Internet Engineering Task Force (IETF) for conveying Quality of Service (QoS) related information, in the form of resource reservations, along the communication path. The RSVP specification (i.e. Request for Comments 2205) provides a narrative description of the protocol without any use of formal techniques; thus, parts of the document may be ambiguous, difficult to understand, and imprecise. So far, RSVP implementations have provided the only mechanism for validating the protocol, and the cost of fixing protocol errors discovered in an implementation can be high. These disadvantages, together with the fact that RSVP is complex, make it a good target for formal specification and verification. This thesis formally defines the RSVP service specification, models RSVP using a formal method known as Coloured Petri Nets (CPNs), and attempts to verify the model. The verification of RSVP proceeds as follows. Firstly, the RSVP service specification is derived from the protocol description and modelled using CPNs. After validating the model, the service language, which defines all possible service primitive occurrence sequences, is generated from the state space of the model using automata reduction techniques that preserve sequences. Secondly, RSVP itself is modelled using CPNs, and the model is analysed for a set of behavioural properties. These include general properties of protocols, such as correct termination, and a set of new properties defined in this thesis that are particular to RSVP. The analysis is based on the state space method: the properties are checked by querying the state graph and checking reachability among multiple nodes of its associated Strongly Connected Component (SCC) graph. As a first step, RSVP is analysed under the assumption of a perfect medium (no loss or duplication) to ensure that protocol errors are not hidden by rare events of the medium. The state space is then reduced to obtain the sequences of service primitives allowed by RSVP, known as the protocol language, and the protocol language is compared with the service language to determine whether they are equivalent. The desired properties are proved to be satisfied by the RSVP CPN model, so the features of RSVP included in the model operate as expected under our modelling and analysis assumptions. The language analysis results show that the service primitive occurrence sequences generated by the RSVP model are included in the proposed model of the service specification. However, some sequences generated from the service specification model are not in the protocol language. These sequences were analysed, and there is strong evidence to suggest that they would also appear in the protocol if the capacity of the medium in the RSVP model were marginally increased. Unfortunately, the standard reachability analysis tools could not handle this case due to state space explosion. / Thesis (PhD)--University of South Australia, 2003
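To make the state-space method concrete, here is a toy sketch in Python, with a made-up two-party handshake standing in for the actual RSVP CPN model: enumerate every reachable state, then check a behavioural property such as absence of unexpected deadlocks.

```python
# Sketch: the state-space method in miniature. Exhaustively explore the
# reachable states of a toy sender/receiver handshake and flag any
# deadlock that is not the intended terminal state.
from collections import deque

def transitions(state):
    sender, receiver = state
    if sender == "idle":
        yield ("path_sent", receiver)               # send PATH message
    if sender == "path_sent" and receiver == "idle":
        yield (sender, "resv_sent")                 # receiver reserves
    if sender == "path_sent" and receiver == "resv_sent":
        yield ("done", "done")                      # reservation confirmed

initial, terminal = ("idle", "idle"), ("done", "done")
seen, queue = {initial}, deque([initial])
while queue:
    s = queue.popleft()
    succs = list(transitions(s))
    if not succs and s != terminal:
        print("deadlock at", s)                     # behavioural property check
    for nxt in succs:
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)
print(f"{len(seen)} reachable states, terminates correctly")
```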
|
619 |
Spatially-structured niching methods for evolutionary algorithms. Dick, Grant January 2008 (has links)
Traditionally, an evolutionary algorithm (EA) operates on a single population with no restrictions on possible mating pairs. Interesting changes to the behaviour of EAs emerge when the structure of the population is altered so that mating between individuals is restricted. Variants of EAs that use such populations are grouped into the field of spatially-structured EAs (SSEAs).
Previous research into the behaviour of SSEAs has primarily focused on the impact space has on selection pressure in the system. Selection pressure is usually characterised by takeover times and the ratio between the neighbourhood size and the overall dimension of the space. While this research has given indications of where and when the use of an SSEA might be suitable, it does not provide complete coverage of system behaviour in SSEAs. This thesis presents new research into areas of SSEA behaviour that have been left either unexplored or only briefly touched upon in the current EA literature.
The behaviour of genetic drift in finite panmictic populations is well understood. This thesis attempts to characterise the behaviour of genetic drift in spatially-structured populations. First, an empirical investigation into genetic drift in two commonly encountered topologies, rings and tori, is performed. It is observed that genetic drift in these two configurations of space is independent of the genetic structure of individuals and additive of the equivalent-sized panmictic population. In addition, localised areas of homogeneity present themselves within the structure purely as a result of drifting. A model based on the theory of random walks to absorbing boundaries is presented which accurately characterises the time to fixation through random genetic drift in ring topologies.
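A toy version of such a drift experiment, with illustrative parameter choices: a neutral copying process on a ring, run until one allele fixes.

```python
# Sketch: genetic drift to fixation on a ring-structured population.
# Each step, a random individual copies the allele of a random ring
# neighbour; with no selection, only drift drives fixation.
import numpy as np

rng = np.random.default_rng(6)

def fixation_time_ring(n=64):
    alleles = np.arange(n)                     # start fully heterogeneous
    steps = 0
    while len(np.unique(alleles)) > 1:
        i = rng.integers(n)
        j = (i + rng.choice((-1, 1))) % n      # ring neighbourhood
        alleles[i] = alleles[j]                # neutral copy: pure drift
        steps += 1
    return steps

times = [fixation_time_ring() for _ in range(20)]
print("mean steps to fixation:", np.mean(times))
```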
A large volume of research has gone into developing niching methods for solving multimodal problems. Previously, these techniques have used panmictic populations. This thesis introduces the concept of localised niching, where the typically global niching methods are applied to the overlapping demes of a spatially structured population. Two implementations, local sharing and local clearing are presented and are shown to be frequently faster and more robust to parameter settings, and applicable to more problems than their panmictic counterparts.
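A minimal sketch of the local-sharing idea under illustrative assumptions (ring demes, a triangular sharing kernel, a standard multimodal test function): each individual's fitness is derated by crowding within its own deme only.

```python
# Sketch: "local sharing" — fitness sharing applied within each overlapping
# deme of a ring population rather than across the whole population.
# The distance measure, radius, sigma and fitness function are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def shared_fitness(pop, fitness, i, radius=2, sigma=0.1):
    """Derate individual i's fitness by its crowding within its own deme."""
    n = len(pop)
    deme = [(i + d) % n for d in range(-radius, radius + 1)]   # ring neighbours
    dists = np.abs(pop[deme] - pop[i])
    niche_count = np.sum(np.maximum(0.0, 1.0 - dists / sigma)) # triangular kernel
    return fitness(pop[i]) / niche_count                       # >= 1, includes self

f = lambda x: np.sin(5 * np.pi * x) ** 6        # classic multimodal test function
pop = rng.random(30)
scores = [shared_fitness(pop, f, i) for i in range(len(pop))]
```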
Current SSEAs typically use a single fitness function across the entire population; in the context of multimodal problems, this means each location in space attempts to discover all the optima. A preferable situation would be to use the inherent spatial properties of an SSEA to localise the optimisation of peaks. This thesis adapts concepts from multiobjective optimisation with environmental gradients and applies them to multimodal problems. In addition to adapting to the fitness landscape, individuals evolve towards their preferred environmental conditions. This has the effect of separating individuals into regions that concentrate on different optima of the global fitness function. The thesis also gives insights into the expected number of individuals occupying each optimum in the problem.
The SSEAs and related models developed in this thesis are of interest to both researchers and end-users of evolutionary computation. From the end-user's perspective, the developed SSEAs require less a priori knowledge of a given problem domain in order to operate effectively, so they can be more readily applied to difficult, poorly-defined problems. Also, the theoretical findings of this thesis provide a more complete understanding of evolution within spatially-structured populations, which is of interest not only to evolutionary computation practitioners, but also to researchers in the fields of population genetics and ecology.
|
620 |
A framework and coordination technologies for peer-to-peer based decentralised workflow systems. Yan, Jun, jyan@it.swin.edu.au January 2004 (has links)
This thesis investigates an innovative framework and process coordination technologies for peer-to-peer based decentralised workflow systems. The aim of this work is to address, fundamentally and from an architectural viewpoint, some of the unsolved problems in contemporary workflow research. The problems addressed in this thesis, i.e., poor performance, vulnerability to failures, poor scalability, user restrictions, unsatisfactory system openness, and lack of support for incompletely specified processes, have become major obstacles to the wide deployment of workflow in the real world. After an in-depth analysis of these problems, this thesis reveals that most of them are caused by the mismatch between the application's nature, i.e., distributed, and the system design, i.e., centralised management. Thus, the old-fashioned client-server paradigm conventionally used in most of today's workflow systems should be replaced with a peer-to-peer based, open, collaborative and decentralised framework which reflects workflow's distributed nature more naturally.
Combining workflow technology and peer-to-peer computing technology, this thesis proposes SwinDeW, a genuinely decentralised workflow approach. The distinguishing design of SwinDeW removes both the centralised data repository and the centralised workflow engine from the system. Instead, workflow participants are facilitated by automated peers which communicate and collaborate with one another directly to fulfil both build-time and run-time workflow functions. To achieve this, an innovative data storage approach, known as “know what you should know”, is proposed: it divides a process model into individual task partitions and distributes each partition to the relevant peers according to a capability match, as sketched below. Based on this data storage approach, novel mechanisms for decentralised process instantiation, instance execution and execution monitoring are explored. Moreover, SwinDeW is further extended to support incompletely-specified processes in the decentralised environment, and new technologies for handling such processes at run-time are presented.
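A minimal sketch of the partitioning idea, with made-up peers, capabilities, and tasks: each peer receives only the task partitions matching its capabilities.

```python
# Sketch: "know what you should know" — distribute each task partition of a
# process model only to peers whose capabilities match. All names are
# illustrative, not SwinDeW's actual data structures.
tasks = {
    "assess_claim":  {"role": "assessor"},
    "approve_claim": {"role": "manager"},
    "notify_client": {"role": "clerk"},
}
peers = {
    "peer_a": {"assessor"},
    "peer_b": {"manager", "assessor"},
    "peer_c": {"clerk"},
}

# Each peer stores only the partitions it may have to enact.
partitions = {p: [t for t, req in tasks.items() if req["role"] in caps]
              for p, caps in peers.items()}
print(partitions)   # no central repository holds the full model
```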
The major contributions of this research are an innovative, decentralised workflow system framework and the corresponding process coordination technologies for system functionality. Issues regarding system performance, reliability, scalability, user support, system openness, and incompletely-specified process support are discussed in depth. Moreover, this thesis contributes the SwinDeW prototype, which implements and demonstrates the design and functionality for proof-of-concept purposes. With these outcomes, performance bottlenecks in workflow systems are likely to be eliminated, whilst increased resilience to failure, enhanced scalability, better user support and improved system openness are likely to be achieved, with support for both completely- and incompletely-specified processes. As a consequence, workflow systems can be expected to become widely deployable to real-world applications to support processes for which this was previously infeasible.
|