301 |
Providing Context in WS-BPEL Processes. George, Allen Ajit. January 2008.
Business processes are increasingly used by organizations to automate their activities. Written in languages like WS-BPEL, they allow an institution to describe precisely its internal operations. As the pace of change increases, however, both organizations and their internal processes are required to be more flexible; they have to account for an increasing amount of externally-driven environment state, or context, and modify their behavior accordingly. This puts a significant burden on business-process programmers, who now have to source, track, and update context from multiple entities, in addition to implementing and maintaining core business logic. Implementing this state-maintenance logic in a WS-BPEL business process is involved. This is because WS-BPEL business processes are modeled as if they were the only thing operating in, and making changes to, the business environment. This mental model does not reflect the real world, where organizations and entities depend on state that is outside their control – state that is modified independent of, and concurrent with, the organization’s activities. This makes it hard for business-process programmers to write context-dependent processes in a concise manner.
This thesis presents a solution to this problem based on the notion of a context variable for WS-BPEL business processes. It describes how context variables are designed using the WS-BPEL language-extension mechanism, and how these variables can be used in business processes. It also outlines an architecture for offering context in the web services environment that uses constructs from the WS-Resource Framework specification. It shows how changes in context can be propagated from these context sources to WS-BPEL context variables using WS-Notification-based publish/subscribe. The design also includes a standards-compliant method for extending web-service responses with references to context sources. Finally, a prototype validating the overall system is described, and enhancements for increasing the utility of context variables proposed.
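A minimal sketch of the propagation pattern described above, assuming hypothetical Python class names (ContextSource, ContextVariable): an external source pushes updates over a publish/subscribe channel so that the process-local variable always holds the latest context. The thesis realizes this with WS-BPEL extension elements, WS-Resource Framework resources and WS-Notification subscriptions, not Python objects.

```python
# Illustrative sketch only: a process-local "context variable" kept up to date
# by publish/subscribe notifications from an external context source.
# The class names are hypothetical and are not taken from the thesis.

from typing import Any, Callable, List


class ContextSource:
    """Stands in for a WS-Resource exposing a piece of environment state."""

    def __init__(self, initial: Any) -> None:
        self._value = initial
        self._subscribers: List[Callable[[Any], None]] = []

    def subscribe(self, callback: Callable[[Any], None]) -> None:
        # Analogous to a WS-Notification Subscribe request.
        self._subscribers.append(callback)
        callback(self._value)  # deliver the current value on subscription

    def update(self, new_value: Any) -> None:
        # The environment changes independently of the process; notify subscribers.
        self._value = new_value
        for callback in self._subscribers:
            callback(new_value)


class ContextVariable:
    """Stands in for a WS-BPEL context variable bound to a context source."""

    def __init__(self, name: str, source: ContextSource) -> None:
        self.name = name
        self.value: Any = None
        source.subscribe(self._on_notification)

    def _on_notification(self, new_value: Any) -> None:
        # The process never polls; the latest context is pushed to it.
        self.value = new_value


if __name__ == "__main__":
    fuel_price = ContextSource(initial=1.10)
    ctx = ContextVariable("fuelPrice", fuel_price)
    fuel_price.update(1.35)        # environment changes independently
    print(ctx.name, ctx.value)     # the process sees 1.35 without extra logic
```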
This solution offers significant advantages: it builds on established practices and well-understood message-exchange patterns, leverages widely used languages, frameworks and specifications, is standards compliant, and has a low barrier-to-entry for business-process programmers. Moreover, when compared to existing alternatives, this solution requires significantly less process logic and fewer interface changes to maintain constantly changing environment state.
|
302 |
Joint Compression and Watermarking Using Variable-Rate Quantization and its Applications to JPEG. Zhou, Yuhan. January 2008.
In digital watermarking, one embeds a watermark into a covertext in such a way that the resulting watermarked signal is robust to distortion caused either by standard data processing in a friendly environment or by malicious attacks in an unfriendly environment. In addition to robustness, there are two other conflicting requirements a good watermarking system should meet: one is perceptual quality, meaning the distortion introduced into the original signal should be small; the other is payload, meaning the amount of information embedded (the embedding rate) should be as high as possible. To a large extent, digital watermarking is a science and/or an art aimed at designing watermarking systems that meet these three conflicting requirements. Because watermarked signals usually need to be compressed in real-world applications, we have looked into the design and analysis of joint watermarking and compression (JWC) systems to achieve efficient tradeoffs among the embedding rate, compression rate, distortion and robustness.
Using variable-rate scalar quantization, an optimum encoding and decoding scheme for JWC systems is designed and analyzed to maximize the robustness in the presence of additive Gaussian attacks under constraints on both compression distortion and composite rate. Simulation results show that in comparison with the previous work of designing JWC systems using fixed-rate scalar quantization, optimum JWC systems using variable-rate scalar quantization can achieve better performance in the distortion-to-noise ratio region of practical interest.
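A hedged sketch, in generic notation rather than the thesis's own, of the kind of constrained design problem described above:

```latex
% Illustrative formulation only; symbols are generic and not taken from the thesis.
% For a given embedding rate, choose the variable-rate quantizer (encoder/decoder)
% to maximize robustness under distortion and composite-rate constraints:
\begin{aligned}
\max_{\text{encoder, decoder}} \quad & \text{robustness to additive Gaussian noise } N \sim \mathcal{N}(0,\sigma^2) \\
\text{subject to} \quad & \mathbb{E}\, d(X, \hat{X}) \le D \qquad \text{(compression distortion)} \\
& R_{\mathrm{composite}} \le R_c \qquad\;\; \text{(rate of the compressed, watermarked signal)}
\end{aligned}
```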
Inspired by the good performance of JWC systems, we then investigate their application to image compression. We look into the design of a joint image compression and blind watermarking system that maximizes the compression rate-distortion performance while maintaining baseline JPEG decoder compatibility and satisfying the additional constraints imposed by watermarking. Two watermark embedding schemes, odd-even watermarking (OEW) and zero-nonzero watermarking (ZNW), have been proposed to provide robustness against a class of standard JPEG recompression attacks.
To maximize the compression performance, two corresponding alternating algorithms have been developed to jointly optimize run-length coding, Huffman coding and quantization table selection subject to the additional constraints imposed by OEW and ZNW respectively. Both algorithms have been demonstrated to have better compression performance than the DQW and DEW algorithms developed in the recent literature. Compared with the OEW scheme, the ZNW embedding method sacrifices some payload but gains more robustness against other types of attacks. In particular, the zero-nonzero watermarking scheme can survive a class of valumetric distortion attacks, including additive noise, amplitude changes and recompression, encountered in everyday usage.
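A minimal sketch of the odd-even idea, assuming a generic parity-based embedding on quantized DCT coefficients; it illustrates the principle only and is not the thesis's jointly optimized OEW/ZNW construction:

```python
import numpy as np


def embed_odd_even(quantized_coeffs: np.ndarray, bits: list, positions: list) -> np.ndarray:
    """Illustrative odd-even embedding: force coefficient parity to match a bit.

    quantized_coeffs: quantized DCT coefficients of one JPEG block (integers).
    bits:             watermark bits to embed (0 or 1).
    positions:        coefficient indices (e.g., in zig-zag order) used for embedding.
    Generic sketch only; the thesis's OEW scheme is optimized jointly with
    run-length coding, Huffman coding and quantization table selection.
    """
    out = quantized_coeffs.copy()
    for bit, pos in zip(bits, positions):
        c = int(out[pos])
        if abs(c) % 2 != bit:            # parity mismatch: nudge by one level
            c += 1 if c >= 0 else -1     # move away from zero to keep the sign
        out[pos] = c
    return out


def extract_odd_even(quantized_coeffs: np.ndarray, positions: list) -> list:
    """Recover bits from coefficient parity (blind extraction)."""
    return [abs(int(quantized_coeffs[p])) % 2 for p in positions]


# Example: embed two bits into a toy block of quantized coefficients.
block = np.array([12, -5, 3, 0, 2, -1, 0, 0])
marked = embed_odd_even(block, bits=[1, 0], positions=[1, 2])
print(extract_odd_even(marked, positions=[1, 2]))   # -> [1, 0]
```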
|
303 |
Investigating the use of variable fluorescence methods to detect phytoplankton nutrient deficiency. Majarreis, Joanna. 06 1900.
Variable fluorescence of chlorophyll a (Fv/Fm), measured by pulse amplitude modulated (PAM) fluorometers, is an attractive target for phytoplankton-related water quality management. Lowered Fv/Fm is believed to reflect the magnitude of nutrient sufficiency or deficiency in phytoplankton. This rapid and specific metric is relevant to Lake Erie, which often experiences problematic Cyanobacteria blooms. It is unknown whether PAMs reliably measure phytoplankton nutrient status or if different PAMs provide comparable results. Water samples collected from Lake Erie and two Lake Ontario sites in July and September 2011 were analysed using alkaline phosphatase assay (APA), P-debt, and N-debt to quantify phytoplankton nutrient status and with three different PAM models (PhytoPAM, WaterPAM and DivingPAM) to determine Fv/Fm. The Lake Ontario, Lake Erie East and Central Basin sites were all N- and P-deficient in July, but only the East and Central Basin and one Lake Ontario site were P-deficient in September. The West Basin sites were P-deficient in July and one West Basin site and a river site were N-deficient in September. Between-instrument Fv/Fm comparisons did not show the expected 1:1 relationship. Fv/Fm from the PhytoPAM and WaterPAM were well-correlated with each other but not with nutrient deficiency. DivingPAM Fv/Fm did not correlate with the other PAM models, but correlated with P-deficiency. Spectral PAM fluorometers (PhytoPAM) can potentially resolve Fv/Fm down to phytoplankton group by additionally measuring accessory pigment fluorescence. The nutrient-induced fluorescent transient (NIFT) is the observation that Fv/Fm drops immediately and recovers when the limiting nutrient is reintroduced to nutrient-starved phytoplankton. A controlled laboratory experiment was conducted on a 2x2 factorial mixture design of P-deficient and P-sufficient Asterionella formosa and Microcystis aeruginosa cultures. Patterns consistent with published reports of NIFT were observed for P-deficient M. aeruginosa in mixtures; the pattern for A. formosa was less clear. This thesis showed that Fv/Fm by itself was not a reliable metric of N or P deficiency and care must be taken when interpreting results obtained by different PAM fluorometers. NIFT analysis using spectral PAM fluorometers may be able to discriminate P-deficiency in M. aeruginosa, and possibly other Cyanobacteria, in mixed communities.
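For reference, Fv/Fm is derived from the minimal (F0) and maximal (Fm) fluorescence of dark-acclimated cells; the short sketch below shows the standard calculation (a textbook definition, not code from the thesis):

```python
def fv_fm(f0: float, fm: float) -> float:
    """Maximum quantum yield of photosystem II: Fv/Fm = (Fm - F0) / Fm.

    f0: minimal fluorescence of dark-acclimated cells (measuring light only).
    fm: maximal fluorescence during a saturating light pulse.
    Standard definition; instrument-specific corrections are omitted.
    """
    if fm <= 0:
        raise ValueError("Fm must be positive")
    return (fm - f0) / fm


# Values near 0.65 are often reported for healthy algal cultures;
# nutrient-stressed cells tend to fall lower.
print(round(fv_fm(f0=180.0, fm=520.0), 3))   # -> 0.654
```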
|
304 |
Undersökning av affinitet till cytokeratin 8 för TS1-218, TS1-2182 och HE1-Q enkelkedjeantikroppar i multicellulära tumörsfäroider / Investigation of affinity to cytokeratin 8 in multicellular tumor spheroids for TS1-218, TS1-2182 and HE1-Q single chain variable fragment antibodies. Piercecchi, Marco. January 2009.
Over the past 30 years, in vitro tests for the detection and treatment of tumors or micrometastases have made great progress thanks to immunochemistry and successful new cell-culture techniques that better reproduce cell growth in three dimensions (3D) and the surrounding stroma (multicellular tumor spheroid culture). TS1-218 scFv (single chain variable fragment) is a monoclonal antibody with affinity for a protein belonging to the cytoskeleton (cytokeratin). Different variants of TS1-218 have been created (a dimer, TS1-2182, and a mutant, HE1-Q) with the aim of increasing affinity and retention time at the site of action. In this project, we tested and compared the properties of all three iodinated antibodies by incubating cultured Hela Hep 2 tumor cell spheroids with them. All three antibody variants showed a good ability to penetrate the spheroids and to bind their epitope in cytokeratin 8. The experiments showed that there were affinity differences between the TS1-218 monomer, dimer and mutant, which manifested as different binding capacities to the spheroids.
|
305 |
Perception-based second generation image coding using variable resolution / Perceptionsbaserad andra generationens bildkodning med variabel upplösning. Rydell, Joakim. January 2003.
In ordinary image coding, the same image quality is obtained in all parts of an image. If it is known that there is only one viewer, and where in the image that viewer is focusing, the quality can be degraded in other parts of the image without incurring any perceptible coding artefacts. This master's thesis presents a coding scheme where an image is segmented into homogeneous regions which are then separately coded, and where knowledge about the viewer's focus point is used to obtain further data reduction. It is concluded that the coding performance does not quite reach the levels attained when applying focus-based quality degradation to coding schemes not based on segmentation.
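A minimal sketch of focus-based quality degradation, assuming a hypothetical Gaussian falloff around the fixation point; the thesis instead couples the focus information with a segmentation-based (second-generation) coder:

```python
import numpy as np


def foveation_weights(height: int, width: int, focus_yx: tuple, sigma: float = 80.0) -> np.ndarray:
    """Per-pixel quality weights that decay with distance from the focus point.

    A Gaussian falloff is used here purely for illustration; any monotonically
    decreasing function of eccentricity would serve the same purpose.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    fy, fx = focus_yx
    dist2 = (ys - fy) ** 2 + (xs - fx) ** 2
    return np.exp(-dist2 / (2.0 * sigma ** 2))   # 1.0 at focus, approaching 0 far away


# Example: allocate a coarser quantizer step where the weight is small.
weights = foveation_weights(480, 640, focus_yx=(240, 320))
base_step = 4.0
quant_step = base_step / np.maximum(weights, 0.1)   # cap the degradation at 10x
print(quant_step.min(), quant_step.max())           # 4.0 near the focus, 40.0 in the periphery
```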
|
306 |
An Evaluation of the Safety and Operational Impacts of a Candidate Variable Speed Limit Control Strategy on an Urban Freeway. Allaby, Peter. January 2006.
Variable Speed Limit Sign (VSLS) systems enable transportation managers to dynamically change the posted speed limit in response to prevailing traffic and/or weather conditions. VSLS are thought to improve safety and reduce driver stress while improving traffic flow and travel times. Although VSLS have been implemented in a limited number of jurisdictions throughout the world, there is currently very limited documentation describing their quantitative safety and operational impacts. The impacts that have been reported are primarily from systems in Europe, and may not be directly transferable to other jurisdictions, such as North America. Furthermore, although a number of modelling studies have been performed to date that quantify the impacts of VSLS, the VSLS control strategies considered are often too complex or based on unrealistic assumptions and therefore cannot be directly applied in practice. Consequently, a need exists for an evaluation framework that quantifies the safety and traffic performance impacts of comprehensive VSLS control strategies suitable for practical applications in North America. This thesis presents the results of an evaluation of a candidate VSLS system for an urban freeway in Toronto, Canada. The evaluation was conducted using a microscopic simulation model (i.e., a model that predicts individual vehicle movements) combined with a categorical crash potential model for estimating safety impacts.

The objectives of this thesis are: 1) to validate a real-time crash prediction model for a candidate section of freeway; 2) to develop a candidate VSLS control algorithm with potential for practical applications; 3) to evaluate the performance of the VSLS control strategy for a range of traffic conditions in terms of safety and travel time; and 4) to test the sensitivity of the VSLS impact results to modifications of the control algorithm.

The analysis of the VSLS impacts under varying levels of traffic congestion indicated that the candidate control strategy was able to provide large safety benefits without a significant travel time penalty, but only for a limited range of traffic conditions. The tested algorithm was found to be insufficiently robust to operate effectively over a wide range of traffic conditions. However, by modifying parameters of the control algorithm, preliminary analysis identified potential improvements in the performance of the VSLS. The modified control strategy resulted in less overall travel time penalty without an adverse impact on the safety benefits. It is anticipated that further modifications to the VSLS control strategy could result in a VSLS that is able to operate over a wide range of traffic conditions and provide more consistent safety and travel time benefits, and the framework used in this study is recommended as an effective tool for optimizing the algorithm structure and parameter values.
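The abstract does not reproduce the candidate control algorithm, so the following sketch is purely a hypothetical illustration of threshold-based VSLS logic with invented thresholds and step sizes; it should not be read as the strategy evaluated in the thesis:

```python
# Hypothetical, illustrative VSLS logic only -- not the candidate algorithm
# evaluated in the thesis. All thresholds, step sizes and hysteresis are invented.

POSTED_LIMITS_KMH = [100, 80, 60]   # allowed posted speed limits, high to low


def next_speed_limit(current_limit: int, avg_speed_kmh: float, occupancy_pct: float) -> int:
    """Step the posted limit down under congestion and back up with hysteresis."""
    idx = POSTED_LIMITS_KMH.index(current_limit)
    congested = avg_speed_kmh < current_limit - 20 or occupancy_pct > 25
    recovering = avg_speed_kmh > current_limit - 5 and occupancy_pct < 15
    if congested and idx < len(POSTED_LIMITS_KMH) - 1:
        idx += 1          # lower the limit one step
    elif recovering and idx > 0:
        idx -= 1          # raise the limit one step
    return POSTED_LIMITS_KMH[idx]


# Example: the freeway slows down, so the posted limit steps from 100 to 80 km/h.
print(next_speed_limit(100, avg_speed_kmh=65.0, occupancy_pct=28.0))   # -> 80
```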
|
307 |
Numerical Methods for Long-Term Impulse Control Problems in Finance. Belanger, Amelie. January 2008.
Several of the more complex optimization problems in finance can be characterized as impulse control problems. Impulse control problems can be written as quasi-variational inequalities, which are then solved to determine the optimal control strategy. Since most quasi-variational inequalities do not have analytical solutions, numerical methods are generally used in the solution process.
In this thesis, the impulse control problem framework is applied to value two complex long-term option-type contracts. Both pricing problems considered are cast as impulse control problems and solved using an implicit approach based on either the penalty method or the operator splitting scheme.
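In generic notation (not taken from the thesis), the value function V(t, x) of an impulse control problem is often characterized by a quasi-variational inequality, and a penalty method enforces the obstacle condition approximately:

```latex
% One common form of the impulse-control quasi-variational inequality
% (generic notation; the thesis's operators and conventions may differ):
\min\!\left( -\frac{\partial V}{\partial t} - \mathcal{L}V,\; V - \mathcal{M}V \right) = 0,
\qquad
\mathcal{M}V(t,x) = \sup_{\zeta}\big\{ V\big(t, \Gamma(x,\zeta)\big) - K(x,\zeta) \big\}.
%
% Penalty approximation: the obstacle condition V >= MV is enforced
% approximately through a large penalty factor 1/epsilon:
-\frac{\partial V_\varepsilon}{\partial t} - \mathcal{L}V_\varepsilon
  - \frac{1}{\varepsilon}\max\big( \mathcal{M}V_\varepsilon - V_\varepsilon,\, 0 \big) = 0 .
```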
The first contract chosen is an exotic employee stock option referred to as an infinite reload option. This contract provides the owner with an infinite number of reload opportunities. Each time a reload occurs, the owner pays the strike price using pre-owned company shares and, in return, receives one share for each option exercised and a portion of a new reload option. Numerical methods based on the classic Black-Scholes equation are developed while taking into account contract features such as vesting periods. In addition, the value of an infinite reload option to its owner is obtained by using a utility maximization approach.
The second long-term contract considered is a variable annuity with a guaranteed minimum death benefit (GMDB) clause. Numerical methods are developed to determine the cost of the GMDB clause while including features such as partial withdrawals. The pricing model is then used to determine the fair insurance charge which minimizes the cost of the contract to the issuer. Due to the long maturity of variable annuities, non-constant market parameters expressed through the use of regime-switching are included in the GMDB pricing model.
|
310 |
Contributions to the Analysis of Experiments Using Empirical Bayes Techniques. Delaney, James Dillon. 10 July 2006.
Specifying a prior distribution for the large number of parameters in the linear statistical model is a difficult step in the Bayesian approach to the design and analysis of experiments. Here we address this difficulty by proposing the use of functional priors and then by working out important details for three-level and higher-level experiments. One of the challenges presented by higher-level experiments is that a factor can be either qualitative or quantitative. We propose appropriate correlation functions and coding schemes so that the prior distribution is simple and the results easily interpretable. The prior incorporates well-known experimental design principles such as effect hierarchy and effect heredity, which helps to automatically resolve the aliasing problems experienced in fractional designs.
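As a generic illustration only (the thesis's functional priors and correlation functions for qualitative and quantitative factors are more elaborate), effect hierarchy is often encoded by letting the prior variance of an effect shrink geometrically with the order of the interaction:

```latex
% Illustrative encoding of effect hierarchy (generic; not the thesis's exact prior):
% an effect beta_I involving the factor set I has prior variance shrinking with |I|,
\beta_I \sim \mathcal{N}\!\big( 0,\; \tau^2 r^{|I|} \big), \qquad 0 < r < 1,
% so main effects (|I| = 1) are a priori larger than two-factor interactions
% (|I| = 2), which in turn dominate three-factor interactions, and so on.
```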
The second part of the thesis focuses on the analysis of optimization experiments. Designed experiments whose primary purpose is to determine optimal settings for all of the factors in some predetermined set are not uncommon. Here we distinguish between the two concepts of statistical significance and practical significance. We perform estimation via an empirical Bayes data analysis methodology that has been detailed in the recent literature, but then propose an alternative to the usual next step in determining optimal factor-level settings. Instead of implementing variable or model selection techniques, we propose an objective function that assists in our goal of finding the ideal settings for all factors over which we experimented. The usefulness of the new approach is illustrated through the analysis of some real experiments as well as through simulation.
|