11

A study of optimization problems involving stochastic systems with jumps

Liu, Chunmin January 2008 (has links)
Optimization problems involving stochastic systems are often encountered in financial systems, network design and routing, supply-chain management, actuarial science, telecommunications systems, and statistical pattern recognition analysis associated with electronic commerce and medical diagnosis. / This thesis aims to develop computational methods for solving three optimization problems whose dynamical systems are described by three different classes of stochastic systems with jumps. / In Chapter 1, a brief review of optimization problems involving stochastic systems with jumps is given, followed by the introduction of the three optimization problems, whose dynamical systems are described by three different classes of stochastic systems with jumps. These three stochastic optimization problems are studied in detail in Chapters 2, 3 and 4, respectively, where literature reviews for each class of problem are also presented. / In Chapter 2, an optimization problem involving nonparametric regression with jump points is considered, and a two-stage method is proposed. In the first stage, we identify the rough locations of all possible jump points of the unknown regression function. In the second stage, we map the yet-to-be-decided jump points into pre-assigned fixed points, dividing the time domain into several sections. A spline function is then used to approximate each section of the unknown regression function; these approximation problems are formulated and solved as optimization problems. The inverse time scaling transformation is then carried out, giving rise to an approximation of the nonparametric regression with jump points. For illustration, several examples are solved using this method, and the results obtained are highly satisfactory. / In Chapter 3, an optimization problem involving nonparametric regression with jump curves is studied, and a two-stage method is presented to construct an approximating surface with a jump location curve from a set of observed data corrupted with noise. In the first stage, we detect an estimate of the jump location curve in the surface. In the second stage, we shift the jump location curve onto a row (or column) of pixels. The shifted region is then divided into two disjoint subregions by the jump-location row pixels. These subregions are expanded into two overlapping subregions, each of which includes the jump-location row pixels. We calculate artificial values at the newly added pixels from the observed data and then approximate the surface on each expanded subregion, making use of the artificial values at the jump-location row pixels of that subregion. The curve with minimal distance between the two surfaces is chosen as the curve dividing the region. Subsequently, two nonoverlapping tensor-product cubic spline surfaces are obtained, and by carrying out the inverse space scaling transformation, the two fitted smooth surfaces in the original space are recovered. For illustration, a numerical example is solved using the proposed method.
/ In Chapter 4, a class of stochastic optimal parameter selection problems described by linear Itô stochastic differential equations with state jumps, subject to probabilistic constraints on the state, is considered, where the jump times as well as the jump heights are decision variables. We show that this constrained stochastic impulsive optimal parameter selection problem is equivalent to a deterministic impulsive optimal parameter selection problem subject to continuous state inequality constraints, where the jump times and heights remain decision variables. We then show that this problem can in turn be transformed into an equivalent constrained deterministic impulsive optimal parameter selection problem with fixed jump times. The continuous state inequality constraints are approximated by a sequence of canonical inequality constraints, leading to a sequence of approximate deterministic impulsive optimal parameter selection problems subject to canonical inequality constraints. For each of these approximate problems, we derive gradient formulas for the cost function and the constraint functions, and on this basis an efficient computational method is developed. For illustration, a numerical example is solved. / Finally, Chapter 5 contains some concluding remarks and suggestions for future studies.
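The two-stage idea of Chapter 2 (first locate candidate jump points, then fit a smooth approximant on each section) can be illustrated with a minimal sketch. This is an assumption-laden simplification: it uses threshold-based jump detection and SciPy smoothing splines rather than the thesis's time-scaling optimization formulation, and `fit_with_jumps`, its threshold, and the synthetic data are all hypothetical.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def fit_with_jumps(t, y, jump_threshold=3.0):
    """Two-stage fit: (1) locate candidate jump points from unusually large
    first differences, (2) fit a smoothing spline on each resulting section."""
    dy = np.abs(np.diff(y))
    scale = np.median(dy) / 0.6745        # robust scale of the differences
    noise_sd = scale / np.sqrt(2)         # diff of i.i.d. noise has variance 2*sigma^2
    jumps = np.where(dy > jump_threshold * scale)[0] + 1
    fits = []
    for idx in np.split(np.arange(len(t)), jumps):
        # s bounds the residual sum of squares; m * sigma^2 is a natural choice
        fits.append(UnivariateSpline(t[idx], y[idx], s=len(idx) * noise_sd**2)
                    if len(idx) >= 4 else None)
    return jumps, fits

# Synthetic test: smooth curve with a jump of height 2 at t = 0.5
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)
y = np.sin(2 * np.pi * t) + 2.0 * (t > 0.5) + rng.normal(0.0, 0.05, t.size)
jumps, fits = fit_with_jumps(t, y)
print("detected jump locations:", t[jumps])
```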
12

Segmentação de imagens e reconstrução de modelos aplicada a estruturas ósseas

Marques, Adriano de Souza [UNESP] 31 October 2008 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / In biomechanics, computing has become a strong ally in image-based studies, as significant advances have followed from the evolution of medical image acquisition techniques. For bone structures, the complexity of the geometric shapes makes obtaining accurate models a difficult process, requiring equally complex computational methods. Computer graphics, on the other hand, offers techniques that enable the appropriate manipulation of these images. Among the various existing methods, the active contour model, also known as snakes, has been widely adopted in segmentation for extracting structures of interest in the medical context. This work uses the Gradient Vector Flow (GVF) active contour method to obtain contour matrices of structures with concave geometries, here represented by cross sections obtained from the tomography of a human mandible. From the matrices obtained, a 3D model of the mandible is generated by triangulation between adjacent contours. Segmentation and triangulation were performed with MATLAB®, and the three-dimensional model was obtained with ANSYS®.
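For readers unfamiliar with GVF, a minimal sketch of the Gradient Vector Flow computation of Xu and Prince follows. It is not the author's MATLAB® implementation; the parameters (`mu`, iteration count) and the toy image are illustrative assumptions.

```python
import numpy as np

def gradient_vector_flow(edge_map, mu=0.2, iters=200):
    """Iteratively diffuse the edge-map gradient (Xu & Prince's GVF):
    u_t = mu * Laplacian(u) - |grad f|^2 * (u - f_x), and likewise for v."""
    fy, fx = np.gradient(edge_map)        # axis 0 = rows, axis 1 = columns
    u, v = fx.copy(), fy.copy()
    mag2 = fx**2 + fy**2                  # gradient magnitude squared
    for _ in range(iters):
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        u += mu * lap_u - mag2 * (u - fx)
        v += mu * lap_v - mag2 * (v - fy)
    return u, v                           # external force field for the snake

# Toy usage: a square "bone section" whose gradient magnitude is the edge map
img = np.zeros((64, 64))
img[20:45, 20:45] = 1.0
u, v = gradient_vector_flow(np.hypot(*np.gradient(img)))
```

The diffused field (u, v) extends edge information into homogeneous regions and into concavities, which is what lets the snake converge where a plain gradient force would stall.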
13

Queer Digital Community: An Analysis of Twitter Counterpublics

Miller, Thomas Ethan 23 March 2023 (has links)
With the growing need for a sociological understanding of behavior on social media platforms, there is a desire to know how marginalized groups engage with these technologies. This study asks whether queer people on Twitter utilize the platform to create a counterpublic - a group of strangers linked by a shared oppositional discourse to the dominant public discourse. To answer this, I compare the interaction patterns and thematic content of queer tweets with a previously identified Twitter counterpublic, Black Twitter, and dominant publics, liberal and conservative Twitter. To locate queer Twitter content, I developed a process that takes a starting term speculating where a community may be and finds the hashtags used most by accounts that recently tweeted with the starting term. Using the starting term "#lgbtq," I discovered that #gay and #lgbt were the most used during the observation period. I also applied this process to find the most used hashtags for the liberal Twitter community (#voteblue and #redwave), the conservative Twitter community (#trump and #maga), and the Black Twitter community (#blackpanther and #kyrie). By analyzing levels of engagement using a negative binomial regression, I find that queer tweets are significantly more likely to receive replies than those from the other communities. Using hierarchical cluster analysis and structured topic modeling, I conduct a content analysis that reveals that a large portion (70%) of queer tweets relates to pornographic content. Through posting intimate content, these tweets express oppositional sexualities excluded in dominant publics. I claim that queer people create a counterpublic on Twitter because tweets using queer hashtags show a higher level of commentary-based communication than the other Twitter communities and develop unique thematic content distinct from and oppositional to the dominant public. Future research should build upon these findings to discover other avenues of queer online community outside of this narrow band of online communication. / Master of Science / In my thesis, I ask whether queer people on Twitter create an online community. To answer this, I compare how queer people interact and what they discuss with previously identified Twitter communities. To locate queer Twitter content, I developed a process that takes a starting term to speculate where a community may be and finds the hashtags used most by accounts that recently tweeted with the starting term. Using the starting term "#lgbtq" to estimate queer Twitter content, I discovered that #gay and #lgbt were the most used during the observation period. I also applied this process to find the most used hashtags for the liberal Twitter community (#voteblue and #redwave), the conservative Twitter community (#trump and #maga), and the Black Twitter community (#blackpanther and #kyrie). By analyzing the levels of engagement, I found that queer tweets are more likely to receive replies than those from the other communities. My content analysis revealed that a large portion (70%) of the queer tweets included pornographic content. Through posting intimate content, these tweets express sexualities that dominant communities exclude. I claim that queer people create a community on Twitter because tweets using queer hashtags show a higher level of commentary-based communication than the other Twitter communities and develop unique discussion content. However, my findings are limited to a narrow band of online communication. Future research should build upon my research to discover other avenues of queer online community.
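The hashtag-discovery process described above can be approximated offline with a simple co-occurrence count. The sketch below assumes tweets have already been collected as (account, text) pairs, deliberately avoiding any Twitter API specifics; the function name and sample data are hypothetical.

```python
import re
from collections import Counter

def top_community_hashtags(tweets, seed="#lgbtq", k=2):
    """Given (account, text) pairs, find accounts that recently used the
    seed hashtag, then rank the other hashtags those accounts use most."""
    tag = re.compile(r"#\w+")
    seed_users = {acct for acct, text in tweets
                  if seed in (t.lower() for t in tag.findall(text))}
    counts = Counter(t.lower()
                     for acct, text in tweets if acct in seed_users
                     for t in tag.findall(text))
    counts.pop(seed, None)               # drop the seed term itself
    return counts.most_common(k)

sample = [("a", "pride month #lgbtq #gay"), ("a", "#gay rights"),
          ("b", "#lgbt community #lgbtq"), ("c", "#maga rally")]
print(top_community_hashtags(sample))    # -> [('#gay', 2), ('#lgbt', 1)]
```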
14

Harnessing Artificial Intelligence and Computational Methods for Advanced Spectroscopy Analysis

Mousavi Masouleh, Seyed Shayan January 2024 (has links)
The emergence of advanced computational techniques and artificial intelligence has strongly impacted materials discovery and optimization. This study focuses on applying computational methods to extract information from complex spectral systems. Three distinct tiers of information extraction from hyperspectral data are explored: integrating light data treatment with computational modeling, employing convolutional neural networks for signal reconstruction, and advancing quantification using probabilistic machine learning. In the first tier, utilizing electron energy loss spectroscopy (EELS) in conjunction with boundary element method modeling, we uncovered the broadband plasmonic properties of wrinkled gold structures and their origin. We demonstrated the link between broadband plasmonic characteristics and surface nano-features, offering insights into property tunability. To benefit the broader microscopy community, in the second tier we developed EELSpecNet, a Python script based on convolutional neural networks. EELSpecNet reconstructs signals to retrieve details that were obscured by various signal artifacts. EELSpecNet was benchmarked on near-zero-loss EELS, a challenging signal that contains crucial phononic and plasmonic information. The results clearly show that this neural network approach surpasses conventional Bayesian methods in deconvolution, particularly in terms of information retrieval, signal fidelity, and noise reduction. The final tier of this research introduces an innovative approach to spectral analysis and quantification using probabilistic machine learning methods. By employing Markov chain Monte Carlo sampling and Gaussian process regression models, this tool facilitates spectral quantification, provides comprehensive uncertainty analysis, and reduces human biases in the decision-making and model-selection processes. This tool is particularly useful for in operando X-ray diffraction data analysis, a key technique for examining battery materials. The method effectively disentangles overlapping peaks, quantifies each peak, and tracks their evolution. Tested on both synthetic and real experimental data, the tool demonstrated its efficacy and versatility. Given its broad adaptability, this method is suitable for a variety of spectroscopy techniques. / Thesis / Doctor of Science (PhD)
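As a rough illustration of the third tier, the sketch below fits a single Gaussian peak with a random-walk Metropolis sampler so that the quantified peak area carries an uncertainty estimate. It is a toy stand-in for the thesis's MCMC/Gaussian-process tool: pure NumPy, with a hypothetical noise model, priors, and step sizes.

```python
import numpy as np

def metropolis_peak_fit(x, y, n_steps=20000, noise_sd=0.05, seed=1):
    """Random-walk Metropolis over (amplitude, center, width) of a single
    Gaussian peak; posterior samples give quantification with uncertainty."""
    rng = np.random.default_rng(seed)
    def log_post(theta):
        a, c, w = theta
        if a <= 0 or w <= 0:
            return -np.inf                 # flat priors on positive a, w
        resid = y - a * np.exp(-0.5 * ((x - c) / w) ** 2)
        return -0.5 * np.sum((resid / noise_sd) ** 2)
    theta = np.array([y.max(), x[np.argmax(y)], (x[-1] - x[0]) / 10])
    lp, samples = log_post(theta), []
    for _ in range(n_steps):
        prop = theta + rng.normal(0, [0.02, 0.02, 0.01])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples[n_steps // 2:])       # discard burn-in

x = np.linspace(-1, 1, 200)
rng = np.random.default_rng(0)
y = 1.2 * np.exp(-0.5 * ((x - 0.1) / 0.15) ** 2) + rng.normal(0, 0.05, x.size)
s = metropolis_peak_fit(x, y)
areas = s[:, 0] * s[:, 2] * np.sqrt(2 * np.pi)    # Gaussian peak area
print(f"area = {areas.mean():.3f} +/- {areas.std():.3f}")
```

Extending this to several overlapping peaks amounts to summing Gaussians in `log_post`, which is where the disentangling described in the abstract happens.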
15

Výpočetní studie krátkých peptidů a miniproteinů a vliv prostředí na jejich konformaci. / Computational study of short peptides and miniproteins in different environments

Vymětal, Jiří January 2014 (has links)
Apart from their biological functions, peptides are of utmost importance as models for the unfolded, denatured or disordered states of proteins. Similarly, miniproteins such as Trp-cage have proven their role as simple models for both experimental and theoretical studies of protein folding. Molecular dynamics and computer simulations can provide a unique insight into processes at the atomic level. However, simulations of peptides and miniproteins face two cardinal problems: inaccuracy of force fields and inadequate conformational sampling. Both principal issues were tackled in this thesis. Firstly, the differences among several force fields for peptides and proteins were investigated. We demonstrated the inability of the force fields used to consistently predict the intrinsic conformational preferences of individual amino acids in the form of dipeptides, and the source of the discrepancies was traced. In order to shed light on the nature of conformational ensembles under various denaturing conditions, we studied host-guest AAXAA peptides. The simulations revealed that thermal and chemical denaturation by urea produce qualitatively different ensembles and shift the propensities of individual amino acids toward particular conformers. The problem of insufficient conformational sampling was dealt with by introducing gyration- and...
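A conformer-propensity analysis of the kind described, that is, per-residue populations of backbone conformers, might be sketched as below. The coarse Ramachandran basin boundaries used here are assumptions for illustration, not the thesis's definitions.

```python
import numpy as np

def basin_propensities(phi, psi):
    """Classify (phi, psi) dihedral samples (degrees) into coarse
    Ramachandran basins and return their fractional populations."""
    basins = {"alpha": 0, "beta/PPII": 0, "alpha_L": 0, "other": 0}
    for f, s in zip(phi, psi):
        if -160 <= f <= -20 and -120 <= s <= 50:
            basins["alpha"] += 1          # right-handed helical region
        elif -180 <= f <= -20 and (s > 50 or s < -120):
            basins["beta/PPII"] += 1      # extended / polyproline II
        elif 20 <= f <= 160 and -50 <= s <= 100:
            basins["alpha_L"] += 1        # left-handed helical region
        else:
            basins["other"] += 1
    n = len(phi)
    return {k: v / n for k, v in basins.items()}

# Hypothetical dihedral samples from a dipeptide trajectory
phi = [-65, -120, 60, -70]
psi = [-40, 130, 40, -35]
print(basin_propensities(phi, psi))   # {'alpha': 0.5, 'beta/PPII': 0.25, ...}
```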
16

Eficácia em problemas inversos: generalização do algoritmo de recozimento simulado e função de regularização aplicados a tomografia de impedância elétrica e ao espectro de raios X / Efficiency in inverse problems: generalization of simulated annealing algorithm and regularization function applied to electrical impedance tomography and X-rays spectrum

Menin, Olavo Henrique 08 December 2014 (has links)
Modeling of processes in physics and engineering frequently yields inverse problems. These problems are generally difficult to solve, since they are classified as ill-posed. Solving them as optimization problems requires minimizing an objective function, which measures the discrepancy between the experimental data and those obtained from the theoretical model, plus a regularization function. For most practical inverse problems, this objective function is non-convex and requires a stochastic optimization method. Among these is the simulated annealing algorithm, which rests on three pillars: i) the visitation distribution over the solution space; ii) the acceptance criterion; and iii) the control of the stochasticity of the process. Here, we propose a new generalization of the simulated annealing algorithm and of the regularization function. In the optimization algorithm, we generalize both the cooling schedule, which is usually algebraic or logarithmic, and the Metropolis acceptance criterion. Regarding the regularization function, we unify the most widely used versions into a single formula, whose control parameter allows moving continuously between Tikhonov and entropic regularization. Through numerical experiments, we apply our algorithm to two important inverse problems in medical physics: determining the spectrum of an X-ray beam from its attenuation curve, and image reconstruction in electrical impedance tomography. The results show that the proposed optimization algorithm is efficient and exhibits an optimal parameter regime related to the divergence of the second moment of the visitation distribution.
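One well-known way to generalize the Metropolis criterion is the Tsallis q-exponential, which recovers the classical rule as q approaches 1. The sketch below combines such a rule with an algebraic cooling law and a Tikhonov-regularized least-squares objective. The specific q-exponential form, the cooling exponent, and the toy problem are assumptions for illustration, not necessarily the generalization proposed in this thesis.

```python
import numpy as np

def q_exp(u, q):
    """Tsallis q-exponential; recovers exp(u) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(u)
    base = 1.0 + (1.0 - q) * u
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def generalized_sa(objective, x0, q=1.5, t0=1.0, n_steps=5000, step=0.1, seed=0):
    """Simulated annealing with a q-generalized Metropolis rule and an
    assumed algebraic cooling schedule (classical SA is the q -> 1 limit)."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), objective(x0)
    best, fbest = x.copy(), fx
    for k in range(1, n_steps + 1):
        temp = t0 / k ** 0.5                  # assumed algebraic cooling law
        cand = x + rng.normal(0, step, x.shape)
        fc = objective(cand)
        if fc < fx or rng.random() < q_exp(-(fc - fx) / temp, q):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

# Toy inverse problem: data misfit plus lambda * Tikhonov penalty
A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([1.0, 1.0])
lam = 0.1
obj = lambda x: np.sum((A @ x - b) ** 2) + lam * np.sum(np.asarray(x) ** 2)
print(generalized_sa(obj, [0.0, 0.0]))
```

Swapping the Tikhonov penalty for an entropic one, or interpolating between them with a control parameter, changes only the `obj` line, which is the modularity the abstract's unified regularization formula exploits.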
17

Simulation and modeling of flow field around a horizontal axis wind turbine (HAWT) using RANS method

Unknown Date (has links)
The principal objective of the proposed CFD analysis is to investigate the flow field around a horizontal axis wind turbine rotor and calculate the turbine's power. A full three-dimensional computational fluid dynamics method based on the Reynolds-averaged Navier-Stokes approach was used in this study. The wind turbine has three blades and a rotor diameter of six meters. One third of the wind turbine rotor was modeled by means of 120° periodicity in a moving reference frame system. The power coefficient curve obtained from the CFD results is compared with experimental data from the NREL Phase VI rotor experiment, and the numerical result shows close agreement with the experimental data. The simulation results include the velocity distribution, the pressure distribution along the flow direction, the turbulent wake behind the wind turbine, and the turbine's power. The discussion also includes the effect of wind speed on the turbine's power. / by Armen Sargsyan. / Thesis (M.S.C.S.)--Florida Atlantic University, 2010.
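The power coefficient against which the CFD and NREL data are compared is the standard C_p = P / (½ ρ A V³). A minimal sketch follows, using the 6 m rotor diameter from the abstract but otherwise illustrative numbers: the air density, power, and wind speed are assumptions, not values from the thesis.

```python
import numpy as np

def power_coefficient(power_w, wind_speed, rotor_diameter=6.0, rho=1.225):
    """C_p = P / (0.5 * rho * A * V^3): the fraction of the kinetic power
    in the wind captured by the rotor (Betz limit is about 0.593)."""
    area = np.pi * (rotor_diameter / 2.0) ** 2   # swept area of the rotor
    return power_w / (0.5 * rho * area * wind_speed ** 3)

# Illustrative values only (not the NREL Phase VI data):
print(f"Cp = {power_coefficient(6000.0, 10.0):.3f}")   # ~0.347
```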
18

Um ambiente computacional para um teste de significância bayesiano / A computational environment for a Bayesian significance test

Silvio Rodrigues de Faria Junior 09 October 2006 (has links)
In 1999, Pereira and Stern [Pereira and Stern, 1999] introduced the Full Bayesian Significance Test (FBST), specially designed to provide a value of evidence supporting a precise hypothesis H. Despite its good conceptual properties and its ability to handle virtually any class of precise hypotheses in parametric models, the diffusion of the method in the scientific community has been strongly limited by the absence of an integrated environment in which a researcher can formulate and implement the test of interest. The goal of this work is to propose an implementation of an integrated computational environment for the FBST that is flexible enough to handle a large class of problems. As a case study, we present the formulation of the FBST for a classical problem in population genetics, the Hardy-Weinberg equilibrium law.
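A minimal sketch of the FBST e-value for the Hardy-Weinberg case study follows. Under a uniform prior, the posterior for trinomial genotype counts is Dirichlet; the supremum of the posterior density over the null manifold (p², 2p(1−p), (1−p)²) is found by a one-dimensional search, and the mass of the tangential set is estimated by Monte Carlo. This is a generic reconstruction of the published FBST recipe, not the thesis's environment; the function name and sample counts are hypothetical.

```python
import numpy as np
from scipy.stats import dirichlet
from scipy.optimize import minimize_scalar

def fbst_hardy_weinberg(n_aa, n_ab, n_bb, n_draws=100_000, seed=0):
    """FBST e-value for H: (t1, t2, t3) = (p^2, 2p(1-p), (1-p)^2).
    Posterior is Dirichlet(counts + 1) under a uniform prior on the simplex."""
    alpha = np.array([n_aa, n_ab, n_bb]) + 1.0
    post = dirichlet(alpha)
    # Supremum of the posterior density over the null set (1-D search in p).
    neg_dens = lambda p: -post.pdf([p**2, 2*p*(1-p), (1-p)**2])
    res = minimize_scalar(neg_dens, bounds=(1e-6, 1 - 1e-6), method="bounded")
    sup_h = -res.fun
    # Posterior mass of the tangential set {theta : density > sup_h}.
    draws = post.rvs(n_draws, random_state=seed)
    tangential = np.mean(post.pdf(draws.T) > sup_h)
    return 1.0 - tangential               # e-value: evidence supporting H

print(f"ev(H) = {fbst_hardy_weinberg(25, 50, 25):.3f}")   # near-HWE counts
```

With counts close to Hardy-Weinberg proportions the tangential set is small and the e-value approaches 1; strongly disequilibrated counts drive it toward 0.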
20

Mathematical Analysis of Some Partial Differential Equations with Applications

Chen, Kewang 01 January 2019 (has links)
In the first part of this dissertation, we produce and study a generalized mathematical model of solid combustion. Our generalized model encompasses two special cases from the literature: one in which heat diffusion in the product is negligible, for example when the burnt product is a foam-like substance, and another in which the diffusivities in the reactant and product are assumed equal. In addition, our model captures the dynamics across a range of settings in which the diffusivity ratio between the burned and unburned materials varies between 0 and 1. The dynamics of temperature distribution and interfacial front propagation in this generalized solid combustion model are studied through both asymptotic and numerical analyses. For the asymptotic analysis, we first analyze the linear instability of a basic solution to the generalized model. We then focus on the weakly nonlinear case, in which a small perturbation of a neutrally stable parameter is taken so that the linearized problem is marginally unstable. The multiple-scale expansion method is used to obtain an asymptotic solution for large time by modulating the most linearly unstable mode. In parallel, we numerically integrate the exact problem with the Crank-Nicolson method. Since the numerical solutions are very sensitive to the derivative interfacial jump condition, we integrate the partial differential equation to obtain an integro-differential equation as an alternative condition. The resulting system of nonlinear algebraic equations is then solved by Newton's method, taking advantage of the sparse structure of the Jacobian matrix. By comparing our asymptotic and numerical solutions, we show that the asymptotic solution captures the marginally unstable behavior of the solution over a range of model parameters. Using the numerical solutions, we also delineate the role of the diffusivity ratio between the burned and unburned materials. We find that, for a representative set of parameter values, the solution is stabilized by increasing the temperature ratio between the temperature of the fresh mixture and the adiabatic temperature of the combustion products. This trend is quite linear when a parameter related to the activation energy is close to the stability threshold; farther from this threshold, the behavior is more nonlinear, as expected. Finally, for small values of the temperature ratio, we find that the solution is stabilized by increasing the diffusivity ratio. This stabilizing effect does not persist as the temperature ratio increases: competing effects produce a “cross-over” phenomenon when the temperature ratio increases beyond about 0.2. In the second part, we study the existence and decay rate of solutions to a transmission problem for the plate vibration equation with a memory condition on one part of the boundary. From the physical point of view, the memory effect described by our integral boundary condition can be caused by the interaction of the domain with a viscoelastic element on one part of the boundary. In fact, the three different boundary conditions in our problem formulation imply that the domain is composed of two different materials, with one condition imposed on the interface and the two other conditions on the inner and outer boundaries, respectively. Such transmission problems are interesting not only from the point of view of the general theory of PDEs but also for their applications in mechanics.
For the mathematical analysis, we first prove the global existence of weak solutions using the Faedo-Galerkin method and compactness arguments. Then, without imposing zero initial conditions on one part of the boundary, two explicit decay rate results are established under two different assumptions on the resolvent kernels. Both decay results allow a wider class of relaxation functions and initial data, and thus generalize some previous results in the literature.
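The Crank-Nicolson/Newton machinery mentioned for the combustion model can be illustrated on a generic one-dimensional reaction-diffusion equation. The sketch below uses a logistic reaction term as a stand-in for the combustion kinetics and omits the free interface and jump conditions of the actual model; the grid, time step, boundary treatment, and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

def crank_nicolson_step(u, dt, dx, f, fprime, newton_tol=1e-10, max_iter=20):
    """One Crank-Nicolson step for u_t = u_xx + f(u) on a 1-D grid with
    homogeneous Dirichlet ghost values; the nonlinear system is solved by
    Newton's method with a sparse (tridiagonal) Jacobian."""
    n = len(u)
    L = diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / dx**2
    I = identity(n)
    rhs = u + 0.5 * dt * (L @ u + f(u))        # explicit half of the scheme
    v = u.copy()                               # Newton iterate for u^{n+1}
    for _ in range(max_iter):
        g = v - 0.5 * dt * (L @ v + f(v)) - rhs
        J = I - 0.5 * dt * (L + diags(fprime(v)))   # sparse Jacobian of g
        dv = spsolve(J.tocsc(), -g)
        v += dv
        if np.linalg.norm(dv, np.inf) < newton_tol:
            break
    return v

# Demo: logistic reaction term as a stand-in for the combustion kinetics
x = np.linspace(0.0, 1.0, 101)
u = np.exp(-100 * (x - 0.3) ** 2)              # initial temperature bump
f = lambda u: u * (1 - u)
fp = lambda u: 1 - 2 * u
for _ in range(100):
    u = crank_nicolson_step(u, dt=1e-3, dx=x[1] - x[0], f=f, fprime=fp)
print("max temperature:", u.max())
```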
