421 |
Example-based Rendering of Textural Phenomena / Kwatra, Vivek, 19 July 2005 (has links)
This thesis explores synthesis by example as a paradigm for rendering real-world phenomena. In particular, phenomena that can be visually described as texture are considered. We exploit, for synthesis, the self-repeating nature of the visual elements constituting these texture exemplars. Techniques for unconstrained as well as constrained/controllable synthesis of both image and video textures are presented.
For unconstrained synthesis, we present two robust techniques that can perform spatio-temporal extension, editing, and merging of image as well as video textures. In one of these techniques, large patches of input texture are automatically aligned and seamlessly stitched with each other to generate realistic-looking images and videos. The second technique is based on iterative optimization of a global energy function that measures the quality of the synthesized texture with respect to the given input exemplar.
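To illustrate the spirit of the second technique, the sketch below implements a stripped-down texture-optimization loop: every output neighbourhood is matched to its nearest exemplar neighbourhood, and overlapping matches are averaged, which minimizes a quadratic texture energy. The patch size, stride, and brute-force nearest-neighbour search are illustrative choices, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, w, stride):
    # all w x w patches on a grid with the given stride, with their positions
    return [((i, j), img[i:i + w, j:j + w])
            for i in range(0, img.shape[0] - w + 1, stride)
            for j in range(0, img.shape[1] - w + 1, stride)]

def texture_optimization(exemplar, out_shape, w=8, stride=4, iters=10):
    # dense set of exemplar neighbourhoods used as the matching set
    ex_patches = np.stack([p for _, p in extract_patches(exemplar, w, 1)])
    ex_flat = ex_patches.reshape(len(ex_patches), -1)
    out = rng.random(out_shape)                       # random initialization
    for _ in range(iters):
        acc = np.zeros(out_shape)
        cnt = np.zeros(out_shape)
        for (i, j), patch in extract_patches(out, w, stride):
            # matching step: nearest exemplar neighbourhood under L2
            d = ((ex_flat - patch.ravel()) ** 2).sum(axis=1)
            best = ex_patches[np.argmin(d)]
            # update step: averaging overlapping matches minimizes the
            # quadratic texture energy for fixed correspondences
            acc[i:i + w, j:j + w] += best
            cnt[i:i + w, j:j + w] += 1
        out = acc / np.maximum(cnt, 1)
    return out

# usage: synthesize a 64x64 texture from a 32x32 exemplar
exemplar = rng.random((32, 32))
result = texture_optimization(exemplar, (64, 64))
```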
We also present a technique for controllable texture synthesis. In particular, it allows for generation of motion-controlled texture animations that follow a specified flow field. Animations synthesized in this fashion maintain the structural properties like local shape, size, and orientation of the input texture even as they move according to the specified flow. We cast this problem into an optimization framework that tries to simultaneously satisfy the two (potentially competing) objectives of similarity to the input texture and consistency with the flow field. This optimization is a simple extension of the approach used for unconstrained texture synthesis.
A general framework for example-based synthesis and rendering is also presented. This framework provides a design space for constructing example-based rendering algorithms. The goal of such algorithms would be to use texture exemplars to render animations for which certain behavioral characteristics need to be controlled. Our motion-controlled texture synthesis technique is an instantiation of this framework where the characteristic being controlled is motion represented as a flow field.
|
422 |
Modeling, Optimization and Power Efficiency Comparison of High-speed Inter-chip Electrical and Optical Interconnect Architectures in Nanometer CMOS Technologies / Palaniappan, Arun, 2010 December 1900 (has links)
Inter-chip input-output (I/O) communication bandwidth demand has scaled rapidly with integrated circuit scaling, and links have leveraged equalization techniques to operate reliably over band-limited channels at the cost of additional power and area complexity. High-bandwidth inter-chip optical interconnect architectures have the potential to address this increasing I/O bandwidth demand. In future tera-scale systems, the power dissipation of the high-speed I/O link becomes a significant concern. This work presents a design flow for the power optimization and comparison of high-speed electrical and optical links at a given data rate and channel type in 90 nm and 45 nm CMOS technologies.
The electrical I/O design framework combines statistical link analysis techniques, which are used to determine the link margins at a given bit-error rate (BER), with circuit power estimates based on normalized transistor parameters extracted with a constant current density methodology to predict the power-optimum equalization architecture, circuit style, and transmit swing at a given data rate and process node for three different channels. The transmitter output swing is scaled to operate the link at optimal power efficiency. Under consideration for optical links are a near-term architecture consisting of discrete vertical-cavity surface-emitting lasers (VCSEL) with p-i-n photodetectors (PD) and three long-term integrated photonic architectures that use waveguide metal-semiconductor-metal (MSM) photodetectors and either electro-absorption modulator (EAM), ring resonator modulator (RRM), or Mach-Zehnder modulator (MZM) sources. The normalized transistor parameters are applied to jointly optimize the transmitter and receiver circuitry to minimize total optical link power dissipation for a specified data rate and process technology at a given BER.
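To make the flavour of this optimization concrete, the toy sweep below walks over hypothetical equalizer configurations and transmit swings, keeps only those whose Gaussian-noise eye margin meets the target BER, and reports the lowest-power choice. Every model and constant here (channel loss, noise, per-tap power, transmit-power slope) is an invented placeholder, not data from the thesis, whose statistical link analysis and normalized transistor parameters are far more detailed.

```python
import numpy as np
from scipy.stats import norm

TARGET_BER = 1e-12
Q = norm.isf(TARGET_BER)              # required SNR multiple for Gaussian noise
SIGMA_N = 2e-3                        # assumed rms receiver noise (V)
CHANNEL_LOSS_DB = 20.0                # assumed channel loss at Nyquist (dB)
P_PER_TAP_MW = 1.5                    # assumed power per equalizer tap (mW)
TX_MW_PER_VOLT = 40.0                 # assumed transmit power vs. swing slope

def link_power_mw(swing_v, n_taps, eq_gain_db):
    """Return total power if this configuration closes the eye, else inf."""
    gain = 10 ** ((eq_gain_db - CHANNEL_LOSS_DB) / 20.0)
    eye = swing_v * gain                      # residual eye opening (V)
    if eye < 2 * Q * SIGMA_N:                 # BER target not met
        return np.inf
    return TX_MW_PER_VOLT * swing_v + P_PER_TAP_MW * n_taps

# each candidate: (number of taps, assumed equalization gain in dB)
configs = [(taps, 3.0 * taps) for taps in range(1, 8)]
swings = np.linspace(0.1, 1.0, 19)
best = min((link_power_mw(s, t, g), t, s) for t, g in configs for s in swings)
print("optimum: %.1f mW with %d taps at %.2f V swing" % best)
```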
Analysis results show that low-loss channel characteristics and minimal circuit complexity, together with scaling of the transmitter output swing, allow electrical links to achieve excellent power efficiency at high data rates. While the high-loss channel is limited primarily by severe frequency-dependent losses to 12 Gb/s, the critical timing path of the first tap of the decision feedback equalizer (DFE) limits the operation of low-loss channels above 20 Gb/s. Among the optical links, the VCSEL-based link is limited by its bandwidth and maximum power levels to a data rate of 24 Gb/s, whereas EAM and RRM are both attractive integrated photonic technologies capable of scaling data rates past 30 Gb/s while achieving excellent power efficiency in the 45 nm node, and are limited primarily by coupling and device insertion losses. While the MZM offers robust operation due to its wide optical bandwidth, significant improvements in power efficiency must be achieved for it to become applicable to high-density applications.
|
423 |
Development Of Multi-layered Circuit Analog Radar Absorbing Structures / Yildirim, Egemen, 01 July 2012 (has links) (PDF)
A fast and efficient method for the design of multi-layered circuit analog absorbing structures is developed. The method is based on optimization of the specular reflection coefficient of a multi-layered absorbing structure comprising lossy FSS layers, using a genetic algorithm together with equivalent-circuit models of the FSS layers. With the introduced method, two illustrative absorbing structures are designed with -15 dB reflectivity for the normal incidence case in the frequency bands of 10-31 GHz and 5-46 GHz, respectively. To the author's knowledge, the designed absorbers are superior in terms of frequency bandwidth to similar studies conducted so far in the literature. For broadband scattering characterization of periodic structures, numerical FDTD codes are developed, and the introduced method is improved by incorporating them into the design procedure. By taking the limitations of the available production facilities into consideration, a five-layered circuit analog absorber is designed and manufactured. It is shown that the manufactured structure is capable of 15 dB reflectivity minimization in a frequency band of 3.2-12 GHz for the normal incidence case with an overall thickness of 14.2 mm.
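A minimal sketch of the reflection calculation at the heart of such a design loop is given below, assuming each lossy FSS layer can be modelled as a series-RLC sheet shunted across a transmission line, the stack is backed by a metal ground plane, and the genetic algorithm's fitness is the worst in-band specular reflectivity at normal incidence; the candidate layer values are hypothetical.

```python
import numpy as np

ETA0 = 376.73   # free-space wave impedance (ohms)
C0 = 2.998e8    # speed of light (m/s)

def input_impedance(freq, layers):
    """layers: list of (thickness_m, eps_r, sheet) from the ground plane outward;
    sheet = (R, L, C) of a series-RLC FSS shunted at the outer face, or None."""
    w = 2 * np.pi * freq
    z = np.zeros_like(freq, dtype=complex)        # short circuit at the PEC backing
    for d, eps_r, sheet in layers:
        eta = ETA0 / np.sqrt(eps_r)
        beta = w * np.sqrt(eps_r) / C0
        # transform the impedance through the dielectric spacer
        t = np.tan(beta * d)
        z = eta * (z + 1j * eta * t) / (eta + 1j * z * t)
        if sheet is not None:
            R, L, Cs = sheet
            z_sheet = R + 1j * w * L + 1 / (1j * w * Cs)   # series-RLC sheet
            z = z * z_sheet / (z + z_sheet)                # shunt combination
    return z

def worst_reflectivity_db(layers, f_lo, f_hi, n=200):
    f = np.linspace(f_lo, f_hi, n)
    zin = input_impedance(f, layers)
    gamma = (zin - ETA0) / (zin + ETA0)
    return 20 * np.log10(np.abs(gamma)).max()      # GA fitness: minimize this

# hypothetical two-layer candidate evaluated over 10-31 GHz
candidate = [(3e-3, 1.1, (200.0, 1e-9, 5e-15)),
             (2e-3, 1.05, (350.0, 0.5e-9, 10e-15))]
print(worst_reflectivity_db(candidate, 10e9, 31e9))
```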
|
424 |
Data-driven transform optimization for next generation multimedia applications / Sezer, Osman Gokhan, 25 August 2011 (has links)
The objective of this thesis is to formulate a generic dictionary learning method guided by the principle that efficient representations lead to efficient estimations. The fundamental idea behind using transforms or dictionaries for signal representation is to exploit the regularity within data samples such that the redundancy of the representation is minimized subject to a level of fidelity. This observation translates to the rate-distortion cost in the compression literature, where a transform with the lowest rate-distortion cost provides a more efficient representation than the others.
In our work, rather than being used as an analysis tool, the rate-distortion cost is utilized to improve the efficiency of transforms. For this, an iterative optimization method is proposed, which seeks an orthonormal transform that reduces the expected rate-distortion cost of an ensemble of data. Due to the generic nature of the new optimization method, one can design a set of orthonormal transforms either in the original signal domain or on top of a transform-domain representation. To test this claim, several image codecs are designed, which use block-, lapped- and wavelet-transform structures. Significant gains in compression performance are observed compared to the original methods. An extension of the proposed optimization method to video coding gave us state-of-the-art compression results with separable transforms. Also, using robust statistics, an explanation of the superiority of the new design over other learning-based methods, such as the Karhunen-Loève transform, is provided. Finally, the new optimization method and the minimization of the "oracle" risk of diagonal estimators in signal estimation are shown to be equivalent. With the design of new diagonal estimators and the risk-minimization-based adaptation, a new image denoising algorithm is proposed. While these diagonal estimators denoise local image patches, by formulating the optimal fusion of overlapping local denoised estimates, the new denoising algorithm is scaled to operate on large images. In our experiments, state-of-the-art results for transform-domain denoising are achieved.
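As an illustration of this kind of alternating design (a sketch under stated assumptions, not necessarily the thesis's exact procedure), the code below hard-thresholds the transform coefficients, which minimizes a distortion-plus-L0-rate-proxy cost for a fixed transform, and then updates the orthonormal transform by solving an orthogonal Procrustes problem with the SVD.

```python
import numpy as np

def learn_orthonormal_transform(X, lam, iters=50, seed=0):
    """X: d x N matrix of vectorized training blocks.
    Alternates sparse coding under an L0 rate proxy with an orthogonal
    Procrustes update of the analysis transform G."""
    rng = np.random.default_rng(seed)
    d = X.shape[0]
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthonormal init
    G = Q.T
    for _ in range(iters):
        C = G @ X
        # hard threshold: minimizes ||c - Gx||^2 + lam * ||c||_0 for fixed G
        C[np.abs(C) < np.sqrt(lam)] = 0.0
        # transform update: G = U V^T from the SVD of C X^T (orthogonal Procrustes)
        U, _, Vt = np.linalg.svd(C @ X.T)
        G = U @ Vt
    return G

# usage on random 8x8 blocks flattened to length-64 vectors
rng = np.random.default_rng(1)
X = rng.standard_normal((64, 1000))
G = learn_orthonormal_transform(X, lam=0.5)
print(np.allclose(G @ G.T, np.eye(64), atol=1e-8))     # orthonormality check
```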
|
425 |
On Generalized Measures Of Information With Maximum And Minimum Entropy Prescriptions / Dukkipati, Ambedkar, 03 1900 (has links)
Kullback-Leibler relative-entropy or KL-entropy of P with respect to R, defined as ∫_X ln(dP/dR) dP, where P and R are probability measures on a measurable space (X, ℱ), plays a basic role in the definitions of classical information measures. It overcomes a shortcoming of Shannon entropy, whose discrete-case definition cannot be extended to the nondiscrete case naturally. Further, entropy and other classical information measures can be expressed in terms of KL-entropy, and hence properties of their measure-theoretic analogs will follow from those of measure-theoretic KL-entropy. An important theorem in this respect is the Gelfand-Yaglom-Perez (GYP) theorem, which equips KL-entropy with a fundamental definition and can be stated as: measure-theoretic KL-entropy equals the supremum of KL-entropies over all measurable partitions of X. In this thesis we provide the measure-theoretic formulations for ‘generalized’ information measures, and state and prove the corresponding GYP-theorem – the ‘generalizations’ being in the sense of Rényi and nonextensive, both of which are explained below.
The Kolmogorov-Nagumo average or quasilinear mean of a vector x = (x₁, . . . , xₙ) with respect to a pmf p = (p₁, . . . , pₙ) is defined as ⟨x⟩_ψ = ψ⁻¹(∑_{k=1}^n p_k ψ(x_k)), where ψ is an arbitrary continuous and strictly monotone function. Replacing the linear averaging in Shannon entropy with Kolmogorov-Nagumo averages (KN-averages) and further imposing the additivity constraint – a characteristic property of the underlying information associated with a single event, which is logarithmic – leads to the definition of α-entropy or Rényi entropy. This is the first formal well-known generalization of Shannon entropy. Using this recipe of Rényi's generalization, one can prepare only two information measures: Shannon and Rényi entropy. Indeed, using this formalism Rényi characterized these additive entropies in terms of axioms of KN-averages. On the other hand, if one generalizes the information of a single event in the definition of Shannon entropy by replacing the logarithm with the so-called q-logarithm, defined as ln_q x = (x^{1−q} − 1)/(1 − q), one gets what is known as Tsallis entropy. Tsallis entropy is also a generalization of Shannon entropy, but it does not satisfy the additivity property. Instead, it satisfies pseudo-additivity of the form x ⊕_q y = x + y + (1 − q)xy, and hence it is also known as nonextensive entropy. One can apply Rényi's recipe in the nonextensive case by replacing the linear averaging in Tsallis entropy with KN-averages and thereby imposing the constraint of pseudo-additivity. A natural question that arises is: what are the various pseudo-additive information measures that can be prepared with this recipe? We prove that Tsallis entropy is the only one. Here, we mention that one of the important characteristics of this generalized entropy is that while canonical distributions resulting from ‘maximization’ of Shannon entropy are exponential in nature, in the Tsallis case they result in power-law distributions.
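For concreteness, the contrast noted above between exponential and power-law maximizers can be written out as below; this is standard material rather than a result of the thesis, and the Tsallis case is shown under ordinary-mean constraints (an escort-expectation convention changes the argument of the q-exponential but not its power-law character).

```latex
\begin{align}
  % Shannon entropy with a mean constraint: Gibbs exponential form
  \max_{p}\ -\sum_k p_k \ln p_k
  \quad\text{s.t.}\quad \sum_k p_k = 1,\ \sum_k p_k x_k = \langle x\rangle
  &\ \Longrightarrow\ p_k \propto e^{-\beta x_k},\\[4pt]
  % Tsallis entropy with the analogous constraints: q-exponential,
  % an asymptotic power law
  \max_{p}\ \frac{1-\sum_k p_k^{\,q}}{q-1}
  \quad\text{s.t.}\quad \sum_k p_k = 1,\ \sum_k p_k x_k = \langle x\rangle
  &\ \Longrightarrow\ p_k \propto \bigl[\,1-(1-q)\,\beta\,x_k\,\bigr]^{\frac{1}{1-q}}
  \equiv e_q^{-\beta x_k}.
\end{align}
```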
The concept of maximum entropy (ME), originally from physics, has been promoted to a general principle of inference primarily by the works of Jaynes and (later on) Kullback. This connects information theory and statistical mechanics via the principle that the states of thermodynamic equilibrium are states of maximum entropy, and further connects to statistical inference via the rule: select the probability distribution that maximizes the entropy. The two fundamental principles related to the concept of maximum entropy are Jaynes' maximum entropy principle, which involves maximizing Shannon entropy, and Kullback's minimum entropy principle, which involves minimizing relative-entropy, with respect to appropriate moment constraints.
Though relative-entropy is not a metric, in cases involving distributions resulting from relative-entropy minimization, one can bring forth certain geometrical formulations. These are reminiscent of squared Euclidean distance and satisfy an analogue of Pythagoras' theorem. This property is referred to as the Pythagoras theorem of relative-entropy minimization, or triangle equality, and plays a fundamental role in geometrical approaches to statistical estimation theory such as information geometry. In this thesis we state and prove the equivalent of Pythagoras' theorem in the nonextensive formalism. For this purpose we study relative-entropy minimization in detail and present some results.
Finally, we demonstrate the use of power-law distributions, resulting from ME prescriptions
of Tsallis entropy, in evolutionary algorithms. This work is motivated by the recently proposed generalized simulated annealing algorithm based on Tsallis statistics.
To sum up, in light of their well-known axiomatic and operational justifications, this thesis establishes some results pertaining to the mathematical significance of generalized measures of information. We believe that these results represent an important contribution towards the ongoing
research on understanding the phenomena of information.
|
426 |
Interference Management For Vector Gaussian Multiple Access Channels / Padakandla, Arun, 03 1900 (has links)
In this thesis, we consider a vector Gaussian multiple access channel (MAC) with users demanding reliable communication at specific (Shannon-theoretic) rates. The objective is to assign vectors and powers to these users such that their rate requirements are met and the sum of powers received is minimum.
We identify this power minimization problem as an instance of a separable convex optimization problem with linear ascending constraints. Under an ordering condition on the slopes of the functions at the origin, an algorithm that determines the optimum point in a finite number of steps is described. This provides a complete characterization of the minimum sum power for the vector Gaussian multiple access channel. Furthermore, we prove a strong duality between the above sum power minimization problem and the problem of sum rate maximization under power constraints.
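For small instances, the structure of this problem class can be illustrated with a generic convex solver, as in the sketch below; the separable cost functions and thresholds are hypothetical placeholders, and the thesis's finite-step algorithm exploits the ascending-constraint structure rather than calling a general-purpose solver.

```python
import cvxpy as cp
import numpy as np

# Illustrative instance: n users, separable convex costs f_i(x_i) = w_i*(e^{x_i}-1),
# subject to linear ascending (partial-sum) constraints with nondecreasing thresholds.
n = 4
w = np.array([1.0, 2.0, 0.5, 1.5])        # hypothetical cost weights
alpha = np.array([0.5, 1.2, 2.0, 3.0])    # nondecreasing thresholds

x = cp.Variable(n, nonneg=True)
cost = cp.sum(cp.multiply(w, cp.exp(x) - 1))                 # separable, convex
constraints = [cp.sum(x[:l + 1]) >= alpha[l] for l in range(n - 1)]
constraints += [cp.sum(x) == alpha[-1]]                      # total is fixed
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(x.value, prob.value)
```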
We then propose finite step algorithms to explicitly identify an assignment of vectors and powers that solve the above power minimization and sum rate maximization problems. The distinguishing feature of the proposed algorithms is the size of the output vector sets. In particular, we prove an upper bound on the size of the vector sets that is independent of the number of users.
Finally, we restrict the vectors to an orthonormal set. The goal is to identify an assignment of vectors (from an orthonormal set) to users such that the user rate requirements are met with minimum sum power. This is a combinatorial optimization problem. We study the complexity of the decision version of this problem. Our results indicate that when the dimensionality of the vector set is part of the input, the decision version is NP-complete.
|
427 |
Gibbs free energy minimization for flow in porous media / Venkatraman, Ashwin, 25 June 2014 (has links)
CO₂ injection in oil reservoirs provides the dual benefit of increasing oil recovery as well as sequestration. Compositional simulations using phase behavior calculations are used to model miscibility and estimate oil recovery. The injected CO₂, however, is known to react with brine. The precipitation and dissolution reactions, especially with carbonate rocks, can have undesirable consequences. The geochemical reactions can also change the mole numbers of components and impact the phase behavior of hydrocarbons.

A Gibbs free energy framework that integrates phase equilibrium computations and geochemical reactions is presented in this dissertation. This framework uses the Gibbs free energy function to unify different phase descriptions - an Equation of State (EOS) for hydrocarbon components and an activity coefficient model for aqueous phase components. A Gibbs free energy minimization model was developed to obtain the equilibrium composition for a system with not just phase equilibrium (no reactions) but also phase and chemical equilibrium (with reactions). This model is adaptable to different reservoirs and can be incorporated in compositional simulators.

The Gibbs free energy model is used for two batch calculation applications. In the first application, solubility models are developed for acid gases (CO₂/H₂S) in water as well as brine at high pressures (0.1-80 MPa) and high temperatures (298-393 K). The solubility models are useful for formulating acid gas injection schemes to ensure continuous production from contaminated gas fields as well as for CO₂ sequestration. In the second application, the Gibbs free energy approach is used to predict the phase behavior of hydrocarbon mixtures - CO₂-nC₁₄H₃₀ and CH₄-CO₂. The Gibbs free energy model is also used to predict the impact of geochemical reactions on the phase behavior of these two hydrocarbon mixtures.

The Gibbs free energy model is integrated with flow using operator splitting to model an application of cation exchange reactions between the aqueous phase and the solid surface. A 1-D numerical model to predict effluent concentrations for a system with three cations using the Gibbs free energy minimization approach was observed to be faster than an equivalent stoichiometric approach. Analytical solutions were also developed for this system using the hyperbolic theory of conservation laws and are compared with experimental results available at laboratory and field scales. / text
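A minimal sketch of the underlying idea, far simpler than the EOS/activity-coefficient framework of the dissertation, is to minimize the total Gibbs energy of a two-component vapor-liquid system under Raoult's-law ideality; the component data are hypothetical, and at the interior minimum the chemical potentials equalize so that y_i P = x_i P_i^sat.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-component system: overall moles z, vapor pressures Psat (bar),
# system pressure P (bar). Decision variables: moles of each component in vapor.
z = np.array([0.4, 0.6])
Psat = np.array([3.0, 0.8])
P = 1.5
eps = 1e-12

def gibbs(v):
    """Dimensionless mixture Gibbs energy (constant terms dropped):
    G/RT = sum_i l_i ln(x_i Psat_i) + sum_i v_i ln(y_i P)."""
    v = np.clip(v, eps, z - eps)
    l = z - v
    x = l / l.sum()            # liquid mole fractions
    y = v / v.sum()            # vapor mole fractions
    return np.sum(l * np.log(x * Psat)) + np.sum(v * np.log(y * P))

v0 = 0.5 * z                                       # start from an even split
res = minimize(gibbs, v0, bounds=[(eps, zi - eps) for zi in z])
v = res.x
l = z - v
x, y = l / l.sum(), v / v.sum()
print("liquid x:", x, "vapor y:", y)
print("Raoult check, y*P vs x*Psat:", y * P, x * Psat)
```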
|
428 |
Environmental levy and green citizenship on plastic shopping bags behaviours in Hong Kong / Wong, Wing-sum, 黃詠森, January 2012 (has links)
The Environmental Levy Scheme on Plastic Shopping Bags was enforced in July 2009. The levy aimed to create a direct fiscal disincentive to reduce the indiscriminate use of plastic shopping bags and to encourage consumers to switch to reusable shopping bags. In theory, fiscal instruments are more efficient and effective in changing people's behaviour, but their impact on attitudes is still in question. The level of green citizenship, which emphasises that people have a responsibility to protect and sustain the environment, is also a good indicator of people's attitudes towards the environment; however, the Hong Kong government tends to rely on fiscal disincentives to change people's behaviour, and green citizenship has never been addressed. Green citizenship is a personal commitment to learn more about the environment and to take responsible environmental action. Environmental citizenship encourages individuals, communities and organizations to think about the environmental rights and responsibilities we all have as residents of planet Earth (Environment Canada, 2006).

This study carried out a questionnaire survey to identify the effect that the levy in Hong Kong has had on environmental attitudes and behaviours, as well as to identify the relative impact of economic incentives versus green citizenship on green attitudes and behaviours. The survey was conducted over two weeks, from 25th April to 9th May 2012, in the form of an internet survey. The research found that the Environmental Levy Scheme on Plastic Shopping Bags affected citizens' behaviour and attitudes towards reducing the use of plastic shopping bags, and also changed people's behavioural intention to act pro-environmentally, provided their beliefs are strong enough to override the disadvantages of pro-environmental actions. However, green citizenship in Hong Kong remains at a private level and the sense of green citizenship in society is still weak; thus, a comprehensive education programme should be carried out by both society (bottom-up) and the government (top-down) to raise the level of green citizenship in society. / published_or_final_version / Environmental Management / Master / Master of Science in Environmental Management
|
429 |
Dynamic compressive sensing: sparse recovery algorithms for streaming signals and video / Asif, Muhammad Salman, 20 September 2013 (has links)
This thesis presents compressive sensing algorithms that utilize system dynamics in the sparse signal recovery process. These dynamics may arise due to a time-varying signal, streaming measurements, or an adaptive signal transform. Compressive sensing theory has shown that under certain conditions, a sparse signal can be recovered from a small number of linear, incoherent measurements. The recovery algorithms, however, for the most part are static: they focus on finding the solution for a fixed set of measurements, assuming a fixed (sparse) structure of the signal.
In this thesis, we present a suite of sparse recovery algorithms that cater to various dynamical settings. The main contributions of this research can be classified into the following two categories: 1) Efficient algorithms for fast updating of L1-norm minimization problems in dynamical settings. 2) Efficient modeling of the signal dynamics to improve the reconstruction quality; in particular, we use inter-frame motion in videos to improve their reconstruction from compressed measurements.
Dynamic L1 updating: We present homotopy-based algorithms for quickly updating the solution for various L1 problems whenever the system changes slightly. Our objective is to avoid solving an L1-norm minimization program from scratch; instead, we use information from an already solved L1 problem to quickly update the solution for a modified system. Our proposed updating schemes can incorporate time-varying signals, streaming measurements, iterative reweighting, and data-adaptive transforms. Classical signal processing methods, such as recursive least squares and the Kalman filter, provide solutions for similar problems in the least squares framework, where each solution update requires a simple low-rank update. We use homotopy continuation for updating L1 problems, which requires a series of rank-one updates along the so-called homotopy path.
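The flavour of warm-started L1 updating can be conveyed with a much simpler solver than the homotopy algorithms of the thesis: the ISTA sketch below re-solves the L1 problem after a streaming measurement arrives, starting from the previous solution so that only a few extra iterations are needed. The problem sizes and regularization weight are arbitrary choices.

```python
import numpy as np

def ista(A, y, lam, x0=None, iters=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    Passing x0 warm-starts the solve (a cheap stand-in for homotopy updating)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(iters):
        g = A.T @ (A @ x - y)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, k = 200, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((80, n)) / np.sqrt(80)
y = A @ x_true
x_hat = ista(A, y, lam=0.01)

# a new streaming measurement arrives: append one row and warm-start the update
a_new = rng.standard_normal((1, n)) / np.sqrt(80)
A2 = np.vstack([A, a_new])
y2 = np.append(y, a_new @ x_true)
x_hat2 = ista(A2, y2, lam=0.01, x0=x_hat, iters=30)   # few iterations suffice
print(np.linalg.norm(x_hat2 - x_true))
```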
Dynamic models in video: We present a compressive-sensing based framework for the recovery of a video sequence from incomplete, non-adaptive measurements. We use a linear dynamical system to describe the measurements and the temporal variations of the video sequence, where adjacent images are related to each other via inter-frame motion. Our goal is to recover a quality video sequence from the available set of compressed measurements, for which we exploit the spatial structure using sparse representations of individual images in a spatial transform and the temporal structure, exhibited by dependencies among neighboring images, using inter-frame motion. We discuss two problems in this work: low-complexity video compression and accelerated dynamic MRI. Even though the processes for recording compressed measurements are quite different in these two problems, the procedure for reconstructing the videos is very similar.
|
430 |
A Collage-Based Approach to Inverse Problems for Nonlinear Systems of Partial Differential Equations / Levere, Kimberly Mary, 30 March 2012 (has links)
Inverse problems occur in a wide variety of applications and are an active area of research in many disciplines. We consider inverse problems for a broad class of nonlinear systems of partial differential equations (PDEs). We develop collage-based approaches for solving inverse problems for nonlinear PDEs of elliptic, parabolic and hyperbolic type. The original collage method for solving inverse problems was developed in [29] with broad application, in particular to ordinary differential equations (ODEs). Using a consequence of Banach's fixed point theorem, the collage theorem, one can bound the approximation error above by the so-called collage distance, which is more readily minimizable. By minimizing the collage distance, the approximation error can be controlled. In the case of nonlinear PDEs we consider the weak formulation of the PDE and make use of the nonlinear Lax-Milgram representation theorem and Galerkin approximation theory in order to develop a similar upper bound on the approximation error. Supporting background theory, including weak solution theory, is presented, and example problems are solved for each type of PDE to showcase the methods in practice. Numerical techniques and considerations are discussed and results are presented.

To demonstrate the practical applicability of this work, we study two real-world applications. First, we investigate a model for the migration of three fish species through floodplain waters. A development of the mathematical model is presented and a collage-based method is applied to this model to recover the diffusion parameters. Theoretical and numerical particulars are discussed and results are presented. Second, we investigate a model for the “Gao beam”, a nonlinear beam model that incorporates the possibility of buckling. The mathematical model is developed and the weak formulation is discussed. An inverse problem that seeks the flexural rigidity of the beam is solved and results are presented. Finally, we discuss avenues of future research arising from this work. / Natural Sciences and Engineering Research Council of Canada, Department of Mathematics & Statistics
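To convey the mechanics of collage-based inversion in the simplest possible setting (the ODE case in the lineage of [29], not the nonlinear PDE setting treated in the thesis), the sketch below recovers a rate constant by minimizing the collage distance associated with the Picard integral operator; the data are synthetic.

```python
import numpy as np

# Recover lambda in u' = lambda * u from sampled data by minimizing the
# collage distance ||u - T_lambda u||, where T_lambda is the Picard operator.
t = np.linspace(0.0, 1.0, 201)
lam_true = -1.7
u = 2.0 * np.exp(lam_true * t) + 0.01 * np.random.default_rng(0).standard_normal(t.size)

def picard(u, t, lam):
    # (T_lambda u)(t) = u(0) + lam * integral_0^t u(s) ds (trapezoidal quadrature)
    integral = np.concatenate(([0.0],
                               np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))))
    return u[0] + lam * integral

# collage distance^2 = ||u - u(0) - lam*I||^2 is quadratic in lam -> closed form
I = picard(u, t, 1.0) - u[0]                 # the integral term alone
lam_hat = np.dot(I, u - u[0]) / np.dot(I, I)
print("recovered lambda:", lam_hat,
      "collage distance:", np.linalg.norm(u - picard(u, t, lam_hat)))
```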
|