691 |
Analyse des Entscheidungsverhaltens landwirtschaftlicher Unternehmer: Anwendung von Discrete Choice Experimenten in den Bereichen Tierwohl, Weidehaltung und Ackerbau / Analysis of farmers' decision behaviour: Application of discrete choice experiments in different agricultural decision situations Danne, Michael 07 May 2018 (has links)
No description available.
|
692 |
[en] REPRESENTATIONS OF TRIANGLE GROUPS IN COMPLEX HYPERBOLIC GEOMETRY / [pt] REPRESENTAÇÕES DE GRUPOS TRIANGULARES EM GEOMETRIA HIPERBÓLICA COMPLEXA LUIS FERNANDO CROCCO AFONSO 13 November 2003 (has links)
[pt] The main objective of this work is the study of type-preserving representations ρ: Γ → PU(2,1) of triangle groups Γ in the group of holomorphic isometries of the two-dimensional complex hyperbolic space H2C. The triangle group Γ(p,q,r) is the group generated by reflections in the sides of a geodesic triangle in the hyperbolic plane with angles π/p, π/q and π/r. In this work we focus on the groups Γ(4,4,∞) and Γ(4,∞,∞). Among other results, we prove the following. For each case there exists a continuous path of representations ρ_t containing all type-preserving representations of Γ in PU(2,1); this gives, in each case, a complete description of the space of representations of Γ in PU(2,1). For each case there exists a closed interval J such that ρ_t is a discrete and faithful representation if and only if t belongs to J. In each case there exists, on the boundary of the deformation space, a representation with accidental parabolic elements. To prove these results we construct special parametrizations of triangles in H2C, build fundamental polyhedra for the groups and use a variant of Poincaré's Polyhedron Theorem. / [en] The main aim of this work is to study type-preserving representations ρ: Γ → PU(2,1) of triangle groups Γ in the group of holomorphic isometries of the two-dimensional complex hyperbolic space H2C. The triangle group Γ(p,q,r) is the group generated by reflections in the sides of a geodesic triangle having angles π/p, π/q and π/r. We focus our attention on the groups Γ(4,4,∞) and Γ(4,∞,∞). Among other results, we prove that for each case: 1. There is a continuous path of representations ρ_t which contains all type-preserving representations of Γ in PU(2,1) up to conjugation by isometries. This gives us a complete description of the representation space of Γ in PU(2,1). 2. There is a closed interval J such that ρ_t is a discrete and faithful representation if and only if t belongs to J. 3. On the boundary of the representation space there is a representation with accidental parabolic elements. To prove these results we give special parametrizations of triangles in H2C. We also build fundamental polyhedra for the groups and use a variant of Poincaré's Polyhedron Theorem.
|
693 |
Geometry of actions, expanders and warped cones Vigolo, Federico January 2018 (has links)
In this thesis we introduce a notion of graphs approximating actions of finitely generated groups on metric and measure spaces. We systematically investigate expansion properties of said graphs and we prove that a sequence of graphs approximating a fixed action ρ forms a family of expanders if and only if ρ is expanding in measure. This enables us to rely on a number of known results to construct numerous new families of expander (and superexpander) graphs. Proceeding in our investigation, we show that the graphs approximating an action are uniformly quasi-isometric to the level sets of the associated warped cone. The existence of such a relation between approximating graphs and warped cones has twofold advantages: on the one hand it implies that warped cones arising from actions that are expanding in measure coarsely contain families of expanders; on the other hand it provides a geometric model for the approximating graphs, allowing us to study the geometry of the expanders thus obtained. The rest of the work is devoted to the study of the coarse geometry of warped cones (and approximating graphs). We do so in order to prove rigidity results which show that our construction is flexible enough to produce a number of new families of expanders that are not coarsely equivalent to one another. As a by-product, we also show that some of these expanders enjoy rather peculiar geometric properties, e.g. we can construct expanders that are coarsely simply connected.
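To make the expander notion concrete (this is a generic illustration, not the thesis's measure-expansion criterion or its approximating graphs), the sketch below computes the spectral gap of the normalized Laplacian for cycle graphs of growing size; a family of bounded-degree graphs is an expander family precisely when this gap stays bounded away from zero, which cycles fail to do.

```python
import numpy as np

def normalized_laplacian(adj):
    """L = I - D^{-1/2} A D^{-1/2} for an undirected graph given by its adjacency matrix."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt

def spectral_gap(adj):
    """Second-smallest eigenvalue of the normalized Laplacian (the expansion gap)."""
    return np.linalg.eigvalsh(normalized_laplacian(adj))[1]

def cycle(n):
    """Adjacency matrix of the n-cycle: bounded degree, but NOT an expander family."""
    adj = np.zeros((n, n))
    for i in range(n):
        adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
    return adj

for n in (50, 100, 200):
    print(n, round(spectral_gap(cycle(n)), 5))   # gap decays like 1/n^2
```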
|
694 |
Discrete gravitational approaches to cosmology Liu, Rex Gerry January 2015 (has links)
Exact solutions to the Einstein field equations are notoriously difficult to find. Most known solutions describe systems with unrealistically high degrees of symmetry. A notable example is the FLRW metric underlying modern cosmology: the universe is assumed to be perfectly homogeneous and isotropic, but in the late universe, this is only true on average and only at large scales. Where an exact solution is not available, discrete gravitational approaches can approximate the system instead. This thesis investigates several cosmological systems using two distinct discrete approaches. Closed, flat, and open ‘lattice universes’ are first considered where matter is distributed as a regular lattice of identical point masses in constant-time hypersurfaces. Lindquist and Wheeler’s Schwarzschild–cell method is applied where the lattice cell around each mass is approximated by a perfectly spherical cell with Schwarzschild space–time inside. The resulting dynamics and cosmological redshifts closely resemble those of the dust-filled FLRW universes, but with certain differences in redshift behaviour attributable to the lattice universe’s lumpiness. The application of Regge calculus to cosmology is considered next. We focus exclusively on the closed models developed by Collins, Williams, and Brewin. Their approach is first applied to a universe where an exact solution is already well-established, the vacuum Λ-FLRW model. The resulting models are found to closely reproduce the dynamics of the continuum model being approximated, though certain constraints on the applicability of the approach are also uncovered. Then using this knowledge, we next model the closed lattice universe. The resulting evolution closely resembles that of the closed dust-filled FLRW universe. Constraints on the placement of the masses in the Regge skeleton are also uncovered. Finally, a ‘lattice universe’ with one perturbed mass is modelled. The evolution is still stable and similar to that of the unperturbed model. The thesis concludes by discussing possible extensions of our work.
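For a concrete sense of the continuum dynamics these discrete models are benchmarked against, the sketch below numerically integrates the closed, dust-filled FLRW model; the parameter values and the simple Euler–Cromer stepping are assumptions made for illustration, not the thesis's Regge or Schwarzschild-cell calculations.

```python
import numpy as np

# Closed (Omega_m > 1), dust-only FLRW universe in units where the present scale
# factor is 1 and time is measured in 1/H0.  For dust, the acceleration equation
# is  a'' = -(1/2) H0^2 Omega_m / a^2, and the Friedmann constraint fixes a'(0).
H0, Omega_m = 1.0, 1.5
dt = 1.0e-3
a = 1.0
v = H0 * np.sqrt(Omega_m / a + (1.0 - Omega_m))   # a'(0) from the Friedmann constraint

history = []
while a > 1.0e-3:                      # expansion, turnaround, recollapse
    acc = -0.5 * H0**2 * Omega_m / a**2
    v += acc * dt                      # Euler-Cromer (semi-implicit) step
    a += v * dt
    history.append(a)

a_turn = max(history)
print(f"turnaround at a ~= {a_turn:.3f}; analytic value Omega_m/(Omega_m-1) = {Omega_m/(Omega_m - 1):.3f}")
```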
|
695 |
Contributions to Collective Dynamical Clustering-Modeling of Discrete Time Series Wang, Chiying 27 April 2016 (has links)
The analysis of sequential data is important in business, science, and engineering, for tasks such as signal processing, user behavior mining, and commercial transactions analysis. In this dissertation, we build upon the Collective Dynamical Modeling and Clustering (CDMC) framework for discrete time series modeling, by making contributions to clustering initialization, dynamical modeling, and scaling.
We first propose a modified Dynamic Time Warping (DTW) approach for clustering initialization within CDMC. The proposed approach provides DTW metrics that penalize deviations of the warping path from the path of constant slope. This reduces over-warping, while retaining the efficiency advantages of global constraint approaches, and without relying on domain dependent constraints.
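A minimal sketch of this idea is shown below: a standard DTW dynamic program to which a penalty, weighted by lam, is added for cells that deviate from the constant-slope (diagonal) path. The specific penalty form, the parameter name lam and the example signals are assumptions for illustration; the thesis defines its modified metric in its own way.

```python
import numpy as np

def penalized_dtw(x, y, lam=0.5):
    """DTW distance with an extra cost for straying from the constant-slope path."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = (x[i - 1] - y[j - 1]) ** 2
            # penalty grows with the distance of cell (i, j) from the diagonal path
            off_diag = abs(i / n - j / m)
            local += lam * off_diag
            D[i, j] = local + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.sin(np.linspace(0.0, 6.0, 50))
y = np.sin(np.linspace(0.0, 6.0, 65))
print(penalized_dtw(x, y))   # larger lam forces the warping path to hug the diagonal
```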
Second, we investigate the use of semi-Markov chains as dynamical models of temporal sequences in which state changes occur infrequently. Semi-Markov chains allow explicitly specifying the distribution of state visit durations. This makes them superior to traditional Markov chains, which implicitly assume an exponential state duration distribution.
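The sketch below samples a toy two-state semi-Markov chain in which each state's visit duration is drawn from an explicit (here shifted-Poisson) distribution rather than the geometric duration implied by an ordinary Markov chain; the state names and parameters are illustrative assumptions, loosely echoing the sleep-stage application.

```python
import numpy as np

rng = np.random.default_rng(0)

states = ["wake", "sleep"]
# In a semi-Markov chain self-transitions are absorbed into the duration model,
# so with two states each visit is followed by a jump to the other state.
transition = np.array([[0.0, 1.0],
                       [1.0, 0.0]])
duration = {                        # explicit state-visit duration distributions
    "wake":  lambda: 1 + rng.poisson(5),
    "sleep": lambda: 1 + rng.poisson(20),
}

def sample_sequence(n_steps, start="wake"):
    seq, s = [], start
    while len(seq) < n_steps:
        seq.extend([s] * duration[s]())                        # stay for the sampled duration
        s = states[rng.choice(2, p=transition[states.index(s)])]
    return seq[:n_steps]

print(sample_sequence(40))
```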
Third, we consider convergence properties of the CDMC framework. We establish convergence by viewing CDMC from an Expectation Maximization (EM) perspective. We investigate the effect on the time to convergence of our efficient DTW-based initialization technique and selected dynamical models. We also explore the convergence implications of various stopping criteria.
Fourth, we consider scaling up CDMC to process big data, using Storm, an open source distributed real-time computation system that supports batch and distributed data processing.
We performed experimental evaluation on human sleep data and on user web navigation data. Our results demonstrate the superiority of the strategies introduced in this dissertation over state-of-the-art techniques in terms of modeling quality and efficiency.
|
696 |
Um método para quantificar o estoque em processo à luz da simulação computacional e da análise multicritério Pergher, Isaac 16 March 2011 (has links)
Previous issue date: 2011-03-16 / CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / PROSUP - Programa de Suporte à Pós-Gradução de Instituições de Ensino Particulares / Em ambientes produtivos intermitentes que operam na condição ?produzir para estoque?, com fluxo convergente, a possibilidade de constituir estoques em processo (WIP), de produtos prontos, ou matérias-primas pode aumentar o grau de complexidade da gestão das operações e a necessidade de utilizar procedimentos de controle distintos para cada tipo de estoque. Ao focar no alinhamento da gestão dos estoques aos planos de demanda e capacidade, considerando um ambiente produtivo que emprega a abordagem Conwip, a presente pesquisa tem por finalidade propor um método estruturado que possibilite quantificar o nível de WIP do sistema produtivo, a partir da Simulação por Eventos Discretos e da técnica de Apoio Multicritério ELECTRE TRI. Esta pesquisa tem o intuito de contribuir com a geração de informações que subsidiem a tomada de decisão concernente à escolha de uma configuração de cenário que aluda a um nível de estoque em processo e de produtos prontos considerando o mix de produção sob estudo. Fundamentado na proposta desenvolvida nesta dissertação, o Método de Pesquisa pode ser caracterizado quanto aos procedimentos técnicos, pelo uso da Simulação Computacional e relativo à natureza dos dados, destaca-se o da abordagem essencialmente quantitativa, ou Pesquisa Quantitativa. O método proposto foi avaliado, quanto a sua estrutura e proposta, por especialistas das disciplinas de Simulação, Gestão de Sistemas Produtivos e Métodos Multicritério à Decisão. Uma aplicação do método em um sistema produtivo real também é apresentada / In intermittent productive systems that operate in the condition 'make to stock' with convergent flow, the possibility of generate work in process (WIP), finish good products or raw materials inventories can increase the degree of complexity of the management in the operations and the need of using procedures of different control for each stock type. Focusing in the alignment of the stocks to the demand and capacity plans, considering a productive system which uses the Conwip approach, this research describes a structured method that aims to quantify the level of WIP in the productive system, applying the Events Discrete Simulation and the technique nominated ELECTRE TRI. This research intends to contribute with the generation of information for decision support regarding the choice of work in process and finished goods stock levels, considering the production mix studied. Based in the proposal developed in this work, the Method of Research can be characterized, regarding the technical procedures, as Computational Simulation, and regarding the nature of the data, as Quantitative Research. The method proposed in this research was evaluated for specialists in the disciplines of Simulation, Administration of Production Systems and Multicriteria Decision Analysis. An application of the method in a real productive system is also presented.
|
697 |
A New Insight into Data Requirements Between Discrete Event Simulation and Industry 4.0: A simulation-based case study in the automotive industry supporting operational decisions Mirzaie Shra, Afroz January 2019 (has links)
Current industrial companies are under strong pressure from growing competitiveness and globalization while striving for increased production effectiveness. Meanwhile, turbulent markets and rising customer demands are causing manufacturers to shift strategy. International companies are therefore challenged to pursue change in order to remain competitive on global markets. Consequently, a new industrial revolution has taken place, introduced as Industry 4.0. This concept incorporates organizational improvement and the digitalization of current information and data flows, accomplished by bringing data from embedded systems, connected machines, devices and humans together in a combined interface. Companies are thus given the possibility to improve current production systems while saving operational costs and minimizing inadequate production development. Smart Factories, the foundation of Industry 4.0, make it possible to take more accurate and precise operational decisions by testing industrial changes in a virtual world before real-life implementation. However, in order to ensure that these functions work as intended, enormous amounts of data must be collected, analysed and evaluated. These data will help companies make more self-aware and automated decisions, resulting in increased production effectiveness. The concept will therefore clearly change how operational decisions are made today. Discrete Event Simulation is already a commonly applied tool founded on specific data requirements, as operational changes can be tested in virtual settings. Accordingly, it is believed that simulation can aid companies striving to implement Industry 4.0. As a result, the data requirements shared by Discrete Event Simulation and Industry 4.0 need to be established, while detecting the current data gap in the operational context. Hence, the purpose of this thesis is to analyse the data requirements of Discrete Event Simulation and Industry 4.0 for improving operational decisions in production systems. In order to fulfil this purpose, the following research questions have been stated: RQ1: What are the data challenges in existing production systems? RQ2: What data is required for implementing Industry 4.0 in production systems? RQ3: How can data requirements from Discrete Event Simulation benefit operational decisions when implementing Industry 4.0? The research questions were answered by conducting a case study in collaboration with Scania CV AB, based on observations, interviews and other relevant data collection. In parallel, a literature review focusing on data requirements for operational decisions was compared with the empirical findings. The analysis identified the current data gap in existing production systems with respect to Industry 4.0, which affects the accuracy of operational decisions. In addition, it showed that simulation can give a clearly positive outcome for the adoption of Industry 4.0, together with a clear insight into data requirements.
|
698 |
Modelling framework for assessing nuclear regulatory effectiveness Lavarenne, Jean January 2018 (has links)
This thesis contributes to the effort launched after the Fukushima-Daiichi disaster to improve the robustness of national institutions involved in nuclear safety, because of the role that the failing nuclear regulator had in the accident. The driving idea is to investigate how engineering techniques used in high-risk industries can be applied to institutions involved in nuclear safety to improve their robustness. The thesis focuses specifically on the Office for Nuclear Regulation (ONR), the British nuclear regulator, and its process for structured inspections. The first part of the thesis demonstrates that the hazard and operability (HAZOP) technique, used in the nuclear industry to identify hazards associated with an activity, can be adapted to qualitatively assess the robustness of organisational processes. The HAZOP method was applied to the ONR inspection process and led to the identification of five significant failures or errors. These are: failure to focus on an area/topic deserving regulatory attention; failure to evaluate an area/topic of interest; failure to identify a non-compliance; failure to identify the underlying issue, its full extent and/or safety significance; and failure to adequately share inspection findings. In addition, the study identified the main causal chains leading to each failure. The safeguards of the process, i.e. the mechanisms in place to prevent, detect, resolve and mitigate possible failures, were then analysed to assess the robustness of the inspection process. The principal safeguard found is the superintending inspector, who reviews inspection reports and debriefs inspectors after inspections. It was concluded that the inspection process is robust provided there is excellence in recruitment and training. However, given the predominant role of the superintending inspector, the robustness of the process could be improved by increasing the diversity of safeguards. Finally, suggestions for improvement were made, such as establishing a formal handover procedure between former and new site inspectors, formalising and generalising the shadowing scheme between inspectors and setting minimum standards for inspection debriefs. These results were shared with ONR, which had reached the same conclusions independently, thus validating the new application of the HAZOP method. The second part of the thesis demonstrates that computational modelling techniques can be used to build digital twins of institutions involved in safety, which can then be used to assess their effectiveness. The knowledge gained from the HAZOP study was used in association with computational modelling techniques to build a digital twin of the ONR and its structured inspection process, along with a simple model of a nuclear plant. The model was validated using face-validity and predictive-validation processes. These involved, respectively, an experienced ONR inspector checking the validity of the model's procedures and decision-making processes, and a comparison of the model's output for oversight work against data provided by the ONR. The effectiveness of the ONR was then evaluated using a scenario where a hypothetical, newly discovered phenomenon threatens the integrity of the plant, with ONR inspectors gradually learning and sharing new information about it. Monte-Carlo simulation was used to estimate the cost of regulatory oversight and the probability that the ONR model detects and resolves the issue introduced before it causes an accident.
Different arrangements were tested, in particular one with a superintending inspector reviewing inspection reports and one with a formal information-sharing process. For this scenario, the two improvements were found to have a similar impact on the success probability; however, the former achieves it at only half the cost.
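As a toy illustration of this kind of Monte-Carlo estimate (all rates, costs and the inspection policy below are hypothetical placeholders, not ONR data or the thesis's model), one can sample many oversight histories and count how often the issue is detected before an accident.

```python
import random

# Each simulated year the regulator performs a number of inspections; every
# inspection has some chance of detecting the degradation phenomenon before it
# causes an accident.  We estimate the success probability and the mean cost.
random.seed(1)

def run_once(inspections_per_year=4, p_detect=0.15, p_accident_per_year=0.05,
             cost_per_inspection=10.0, horizon_years=20):
    cost = 0.0
    for _ in range(horizon_years):
        for _ in range(inspections_per_year):
            cost += cost_per_inspection
            if random.random() < p_detect:
                return True, cost            # issue found and resolved
        if random.random() < p_accident_per_year:
            return False, cost               # accident before detection
    return False, cost

N = 100_000
results = [run_once() for _ in range(N)]
p_success = sum(ok for ok, _ in results) / N
mean_cost = sum(c for _, c in results) / N
print(f"P(detect before accident) = {p_success:.3f}, mean oversight cost = {mean_cost:.1f}")
```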
|
699 |
Robust logo watermarking Barr, Mohammad January 2018 (has links)
Digital image watermarking is used to protect the copyright of digital images. In this thesis, a novel blind logo image watermarking technique for RGB images is proposed. The proposed technique exploits the error correction capabilities of the Human Visual System (HVS). It embeds two different watermarks in the wavelet/multiwavelet domains. The two watermarks are embedded in different sub-bands, are orthogonal, and serve different purposes. One is a high capacity multi-bit watermark used to embed the logo, and the other is a 1-bit watermark which is used for the detection and reversal of geometrical attacks. The two watermarks are both embedded using a spread spectrum approach, based on a pseudo-random noise (PN) sequence and a unique secret key. Robustness against geometric attacks such as Rotation, Scaling, and Translation (RST) is achieved by embedding the 1-bit watermark in the Wavelet Transform Modulus Maxima (WTMM) coefficients of the wavelet transform. Unlike normal wavelet coefficients, WTMM coefficients are shift invariant, and this important property is used to facilitate the detection and reversal of RST attacks. The experimental results show that the proposed watermarking technique has better distortion parameter detection capabilities, and compares favourably against existing techniques in terms of robustness against geometrical attacks such as rotation, scaling, and translation.
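A stripped-down sketch of the spread-spectrum embedding and correlation detection described above is shown below. It operates on a generic coefficient vector and omits the wavelet/multiwavelet and WTMM stages; the strength alpha, sequence length and key value are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def pn_sequence(length, key):
    """Pseudo-random +/-1 sequence seeded by the secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=length)

def embed_bit(coeffs, bit, key, alpha=0.05):
    """Additive spread-spectrum embedding of one bit into a coefficient vector."""
    pn = pn_sequence(coeffs.size, key)
    return coeffs + alpha * (1.0 if bit else -1.0) * pn

def detect_bit(coeffs, key):
    """Correlation detector: the sign of the correlation with the PN sequence."""
    pn = pn_sequence(coeffs.size, key)
    return float(np.dot(coeffs, pn)) > 0.0

host = np.random.default_rng(7).normal(size=4096)   # stand-in for sub-band coefficients
marked = embed_bit(host, bit=1, key=1234)
print(detect_bit(marked, key=1234))                  # True with high probability
```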
|
700 |
Estabilidade estrutural aplicada no contexto LDEM Gasparotto, Bruno Grebin January 2017 (has links)
The demand for lighter structures implies economic gains, but increased slenderness can make a structure susceptible to instability under static or dynamic compressive stresses. Instability occurs at several scales of the analysed structure and can interact with other collapse mechanisms, such as unstable crack propagation (a problem governed by fracture mechanics), plastification of the material, or a combination of these effects. In this context, the present work explores the capability of the lattice discrete element method (LDEM), in which the solid is discretized by bars, to simulate static and dynamic instability problems caused by compressive stresses. The method represents the solid as an arrangement of bars whose stiffness is equivalent to that of the continuum being modelled, and non-linear constitutive laws allow rupture to be modelled in a simple way. The discretization leads to uncoupled equations of motion that can be integrated in the time domain with an explicit scheme (the central finite difference method). Because the bars are pinned at their ends and the solution is obtained incrementally, geometrically non-linear problems can be captured, among them structural instability under compressive stresses. As a final example, a three-point bending analysis of a sandwich panel composed of a polyurethane core and two external composite face sheets is carried out; in this case the structural instability is associated with buckling of the compressed face sheet. Finally, the potential of the analysis methodology is discussed. / The demand for lighter structures implies a gain in economy, but the increase in slenderness of the structure may make it susceptible to instability under static or dynamic compressive stresses. Instability occurs at various scales of the analysed structure and may interact with other forms of collapse, such as unstable crack propagation (a problem governed by fracture mechanics), plastification of the material, or a combination of these effects. In this context, the present work explores the ability of the lattice discrete element method (LDEM), based on a discretization of the solid by bars, to simulate problems of static and dynamic instability due to compressive stresses. This method simulates the solid as an arrangement of bars with stiffness equivalent to the continuum to be represented. Non-linear constitutive laws allow simple modelling of rupture. The discretization yields uncoupled equations of motion that can be integrated in the time domain with an explicit method (the central finite difference method). The fact that the bars are pinned at their ends and that the solution is obtained incrementally makes it possible to capture geometrically non-linear problems, among them structural instability under compressive stresses. The last example is the three-point bending analysis of a sandwich panel composed of a polyurethane core and two external composite face sheets; in this case the structural instability is associated with buckling of the compressed face sheet. Finally, the potential of the analysis methodology is discussed.
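As a minimal illustration of the explicit central-finite-difference time integration used in this kind of bar discretization (the one-dimensional chain, the stiffness, mass, time-step and boundary values are assumptions for illustration, not the thesis's three-dimensional bar arrangement or constitutive law), consider the following sketch.

```python
import numpy as np

# Explicit central-difference march for a 1-D chain of lumped masses joined by
# elastic bars; node 0 is clamped, the free end is given a small initial displacement.
n, k, m, dt = 10, 1.0e4, 1.0, 1.0e-3     # nodes, bar stiffness, nodal mass, time step

u = np.zeros(n)
u[-1] = 0.01               # small initial displacement at the free end
u_prev = u.copy()          # zero initial velocity
f_ext = np.zeros(n)        # no external load in this toy case

def internal_force(u):
    """Axial forces transmitted to each node by the bars linking neighbours."""
    f = np.zeros_like(u)
    for i in range(n - 1):
        s = k * (u[i + 1] - u[i])
        f[i] += s
        f[i + 1] -= s
    return f

for _ in range(2000):
    acc = (f_ext + internal_force(u)) / m            # uncoupled nodal equations
    u_next = 2.0 * u - u_prev + dt**2 * acc          # central finite differences
    u_next[0] = 0.0                                  # clamped first node
    u_prev, u = u, u_next

print(u)   # displacement field after the explicit time march
```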
|