
Information Processing in Two-Dimensional Cellular Automata

Cenek, Martin 01 January 2011
Cellular automata (CA) have been widely used as idealized models of spatially-extended dynamical systems and as models of massively parallel distributed computation devices. Despite their wide range of applications and the fact that CA are capable of universal computation (under particular constraints), the full potential of these models is unrealized to date. This is for two reasons: (1) the absence of a programming paradigm to control these models to solve a given problem and (2) the lack of understanding of how these models compute a given task. This work addresses the notion of computation in two-dimensional cellular automata. Solutions using a decentralized parallel model of computation require information processing on a global level. CA have been used to solve the so-called density (or majority) classification task that requires a system-wide coordination of cells. To better understand and challenge the ability of CA to solve problems, I define, solve, and analyze novel tasks that require solutions with global information processing mechanisms. The ability of CA to perform parallel, collective computation is attributed to the complex pattern-forming system behavior. I further develop the computational mechanics framework to study the mechanism of collective computation in two-dimensional cellular automata. I define several approaches to automatically identify the spatiotemporal structures with information content. Finally, I demonstrate why an accurate model of information processing in two-dimensional cellular automata cannot be constructed from the space-time behavior of these structures.
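The density-classification dynamics described in this abstract can be illustrated with a minimal sketch: a synchronous two-dimensional CA on a torus whose cells adopt the majority state of their 3x3 Moore neighborhood. This simple majority rule is an assumption chosen for illustration only; it is known to be an imperfect classifier and is not one of the rules studied in the dissertation.

```python
import numpy as np

def ca_step(grid, rule):
    """One synchronous update of a 2D binary CA with a 3x3 (Moore) neighborhood.

    `rule` maps the 9-cell neighborhood sum (0..9) to the cell's next state.
    Periodic boundaries, as is common in density-classification studies.
    """
    # Sum of the cell and its 8 neighbors via toroidal shifts.
    s = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return rule(s)

# A naive majority-vote rule: each cell adopts its neighborhood majority.
majority = lambda s: (s >= 5).astype(int)

rng = np.random.default_rng(0)
grid = (rng.random((32, 32)) < 0.6).astype(int)  # initial density 0.6
for _ in range(64):
    grid = ca_step(grid, majority)
```

Whether the lattice settles to all ones, all zeros, or a frozen mixed pattern depends on the initial configuration; the naive rule's failure to reliably report the global majority is precisely what makes the classification task a test of collective computation.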

Multi-criteria decision making using reinforcement learning and its application to food, energy, and water systems (FEWS) problem

Deshpande, Aishwarya 12 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Multi-criteria decision making (MCDM) methods have evolved over the past several decades. In today’s world with rapidly growing industries, MCDM has proven to be significant in many application areas. In this study, a decision-making model is devised using reinforcement learning to carry out multi-criteria optimization problems. A learning automata algorithm is used to identify an optimal solution in the presence of single and multiple environments (criteria) using Pareto optimality. The application of this model is also discussed, where the model provides an optimal solution to the food, energy, and water systems (FEWS) problem.
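As a sketch of the kind of reinforcement learner the abstract refers to, the following implements a classical linear reward-inaction (L_RI) learning automaton against a single stochastic environment. The action count, reward probabilities, and learning rate are invented for illustration; the thesis's multi-environment Pareto machinery is not reproduced here.

```python
import random

def l_ri(n_actions, environment, a=0.05, steps=5000, seed=1):
    """Linear reward-inaction (L_RI) learning automaton, single environment.

    On reward, probability mass shifts toward the chosen action; on
    penalty, the probability vector is left unchanged (the "inaction").
    """
    rng = random.Random(seed)
    p = [1.0 / n_actions] * n_actions
    for _ in range(steps):
        i = rng.choices(range(n_actions), weights=p)[0]
        if environment(i):                      # environment signals reward
            p = [pj * (1 - a) for pj in p]      # shrink every probability...
            p[i] += a                           # ...and credit the chosen action
    return p

# Toy single-criterion environment: action 2 is rewarded most often
# (the reward probabilities below are made up for the example).
reward_prob = [0.2, 0.5, 0.9]
env_rng = random.Random(42)
probs = l_ri(3, lambda i: env_rng.random() < reward_prob[i])
```

With multiple criteria, one environment per criterion evaluates each action and Pareto dominance compares the candidates; the sketch above covers only the single-environment core.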

Minimalizace automatů s jednoduchými čítači / Minimization of Counting Automata

Turcel, Matej January 2021
This thesis deals with reducing the size of so-called counting automata. Counting automata extend classical finite automata with counters over a bounded range of values, which allows them to efficiently handle, for example, regular expressions with repetition: a{5,10}. In this work we study the simulation relation on counting automata, by means of which we are able to reduce their size. We build on the classical simulation relation on finite automata, which we extend to counting automata in a non-trivial way. The key difference is the need to simulate not only states but also counters. To this end, we introduce the new concept of a parameterized simulation relation, and we propose methods for computing this relation and for reducing the size of counting automata using it. The proposed methods are also implemented and their effectiveness is evaluated.
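To make the counting-automaton idea concrete: the repetition a{5,10} from the abstract can be matched with a single control state plus a bounded counter, rather than the eleven chained states a plain finite automaton would need. The following is a minimal hand-rolled sketch, not the thesis's parameterized-simulation machinery.

```python
def match_counted(s, lo=5, hi=10, sym="a"):
    """Sketch of a one-state counting automaton for the regex a{lo,hi}.

    A single state keeps a bounded counter c; each transition guards on c
    and increments it, and acceptance checks the final counter value.
    """
    c = 0
    for ch in s:
        if ch != sym or c >= hi:   # guard: wrong symbol or counter exhausted
            return False
        c += 1                     # counter update on the transition
    return lo <= c <= hi           # accepting condition inspects the counter
```

The state space stays constant as the bounds grow, which is exactly why reduction techniques for counting automata cannot simply reuse state-only simulation: the counters must be simulated too.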

Tight Bounds on 3-Neighbor Bootstrap Percolation

Romer, Abel 31 August 2022
Consider infecting a subset $A_0 \subseteq V(G)$ of the vertices of a graph $G$. Let an uninfected vertex $v \in V(G)$ become infected if $|N_G(v) \cap A_0| \geq r$, for some integer $r$. Define $A_t = A_{t-1} \cup \{v \in V(G) : |N_G(v) \cap A_{t-1}| \geq r \},$ and say that the set $A_0$ is \emph{lethal} under $r$-neighbor percolation if there exists a $t$ such that $A_t = V(G)$. For a graph $G$, let $m(G,r)$ be the size of the smallest lethal set in $G$ under $r$-neighbor percolation. The problem of determining $m(G,r)$ has been extensively studied for grids $G$ of various dimensions. We define $$m(a_1, \dots, a_d, r) = m\left (\prod_{i=1}^d [a_i], r\right )$$ for ease of notation. Famously, a lower bound of $m(a_1, \dots, a_d, d) \geq \frac{\sum_{j=1}^d \prod_{i \neq j} a_i}{d}$ is given by a beautiful argument regarding the high-dimensional ``surface area'' of $G = [a_1] \times \dots \times [a_d]$. While exact values of $m(G,r)$ are known in some specific cases, general results are difficult to come by. In this thesis, we introduce a novel technique for viewing $3$-neighbor lethal sets on three-dimensional grids in terms of lethal sets in two dimensions. We also provide a strategy for recursively building up large lethal sets from existing small constructions. Using these techniques, we determine the exact size of all lethal sets under 3-neighbor percolation in three-dimensional grids $[a_1] \times [a_2] \times [a_3]$, for $a_1,a_2,a_3 \geq 11$. The problem of determining $m(n,n,3)$ is discussed by Benevides, Bermond, Lesfari and Nisse in \cite{benevides:2021}. The authors determine the exact value of $m(n,n,3)$ for even $n$, and show that, for odd $n$, $$\left\lceil\frac{n^2+2n}{3}\right\rceil \leq m(n,n,3) \leq \left\lceil\frac{n^2+2n}{3}\right\rceil + 1.$$ We prove that $m(n,n,3) = \left\lceil\frac{n^2+2n}{3}\right\rceil$ if and only if $n = 2^k-1$, for some $k > 0$.
Finally, we provide a construction to prove that for $a_1,a_2,a_3 \geq 12$, bounds on the minimum lethal set on the torus $G = C_{a_1} \square C_{a_2} \square C_{a_3}$ are given by $$2 \le m(G,3) - \frac{a_1a_2 + a_2a_3 + a_3a_1 -2(a_1+a_2+a_3)}{3} \le 3.$$ / Graduate
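The infection process defined in the abstract is easy to simulate directly. The sketch below implements $r$-neighbor bootstrap percolation on a $d$-dimensional grid by sweeping to a fixed point and checking lethality; it is a brute-force illustration only, since the thesis's constructions and bounds are combinatorial rather than computational.

```python
import itertools

def percolates(infected, dims, r=3):
    """Simulate r-neighbor bootstrap percolation on a d-dimensional grid.

    A healthy vertex becomes infected once >= r of its grid neighbors are
    infected; returns True if the initial set is lethal (infects everything).
    """
    cells = set(itertools.product(*(range(a) for a in dims)))
    infected = set(infected)

    def neighbors(v):
        for i in range(len(dims)):
            for d in (-1, 1):
                w = list(v)
                w[i] += d
                if 0 <= w[i] < dims[i]:
                    yield tuple(w)

    changed = True
    while changed:
        changed = False
        for v in cells - infected:
            if sum(w in infected for w in neighbors(v)) >= r:
                infected.add(v)
                changed = True
    return infected == cells
```

For example, in the two-dimensional grid $[n] \times [n]$ the diagonal is the classical minimum lethal set under 2-neighbor percolation, which matches the "surface area" lower-bound intuition mentioned above.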

Optimal Design of Variable-Stiffness Fiber-Reinforced Composites Using Cellular Automata

Setoodeh, Shahriar 21 October 2005
The growing number of applications of composite materials in aerospace and naval structures along with advancements in manufacturing technologies demand continuous innovations in the design of composite structures. In the traditional design of composite laminates, fiber orientation angles are constant for each layer and are usually limited to 0, 90, and ±45 degrees. To fully benefit from the directional properties of composite laminates, such limitations have to be removed. The concept of variable-stiffness laminates allows the stiffness properties to vary spatially over the laminate. Through tailoring of fiber orientations and laminate thickness spatially in an optimal fashion, mechanical properties of a part can be improved. In this thesis, the optimal design of variable-stiffness fiber-reinforced composite laminates is studied using an emerging numerical engineering optimization scheme based on the cellular automata paradigm. A cellular automaton (CA) based design scheme uses local update rules for both field variables (displacements) and design variables (lay-up configuration and laminate density measure) in an iterative fashion to converge to an optimal design. In the present work, the displacements are updated based on the principle of local equilibrium and the design variables are updated according to the optimality criteria for minimum compliance design. A closed form displacement update rule for constant thickness isotropic continua is derived, while for the general anisotropic continua with variable thickness a numeric update rule is used. Combined lay-up and topology design of variable-stiffness flat laminates is performed under the action of in-plane loads and bending loads. An optimality criteria based formulation is used to obtain local design rules for minimum compliance design subject to a volume constraint. It is shown that the design rule splits into a two-step application.
In the first step an optimal lay-up configuration is computed and in the second step the density measure is obtained. The spatial lay-up design problem is formulated using both fiber angles and lamination parameters as design variables. A weighted average formulation is used to handle multiple load case designs. Numerical studies investigate the performance of the proposed design methodology. The optimal lay-up configuration is independent of the lattice density with more details emerging as the density is increased. Moreover, combined topology and lay-up designs are free of checkerboard patterns. The lay-up design problem is also solved using lamination parameters instead of the fiber orientation angles. The use of lamination parameters has two key features: first, the convexity of the minimization problem guarantees a global minimum; second, for both in-plane and bending problems it limits the number of design variables to four regardless of the actual number of layers, thereby simplifying the optimization task. Moreover, it improves the convergence rate of the iterative design scheme as compared to using fiber angles as design variables. Design parametrization using lamination parameters provides a theoretically better design; however, manufacturability of the designs is not certain. The cases of general, balanced symmetric, and balanced symmetric with equal thickness layers are studied separately. The feasible domain for laminates with equal thickness layers is presented for an increasing number of layers. A restricted problem is proposed that maintains the convexity of the design space for laminates with equal thickness layers. A recursive formulation for computing fiber angles for this case is also presented. On the computational side of the effort, a parallel version of the present CA formulation is implemented on message passing multiprocessor clusters. A standard parallel implementation does not converge for an increased number of processors.
Detailed analysis revealed that the convergence problem is due to a Jacobi type iteration scheme, and a pure Gauss-Seidel type iteration through a pipeline implementation completely resolved the convergence problem. Timing results giving the speedup for the pipeline implementation were obtained for up to 260 processors. This work was supported by Grant NAG-1-01105 from NASA Langley Research Center. Special thanks to our project monitor Dr. Damodar R. Ambur for his technical guidance. / Ph. D.
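The Jacobi-versus-Gauss-Seidel distinction that decided the parallel implementation can be seen on a toy problem. The sketch below applies both sweep styles to a small Laplace relaxation, used here only as a stand-in for the local equilibrium update (the actual composite design rules are not reproduced): a Jacobi sweep reads only the previous iterate, while Gauss-Seidel consumes fresh values within the sweep, which is what the pipeline implementation restores across processors.

```python
import numpy as np

def jacobi_step(u):
    """Jacobi sweep: every cell is recomputed from the previous iterate."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

def gauss_seidel_step(u):
    """Gauss-Seidel sweep: updated values are used as soon as computed."""
    u = u.copy()
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return u

# Toy boundary-value problem: top edge held hot at 1, all other edges cold.
u_gs = np.zeros((16, 16))
u_gs[0, :] = 1.0
u_j = u_gs.copy()
for _ in range(200):
    u_gs = gauss_seidel_step(u_gs)
    u_j = jacobi_step(u_j)
```

On this toy problem the Gauss-Seidel iterate approaches the solution in fewer sweeps than Jacobi, which loosely illustrates why the pipelined pure Gauss-Seidel scheme behaved so differently from the naive Jacobi-like parallel scheme described above.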

Combining Static Analysis and Dynamic Learning to Build Context Sensitive Models of Program Behavior

Liu, Zhen 10 December 2005
This dissertation describes a family of models of program behavior, the Hybrid Push Down Automata (HPDA) that can be acquired using a combination of static analysis and dynamic learning in order to take advantage of the strengths of both. Static analysis is used to acquire a base model of all behavior defined in the binary source code. Dynamic learning from audit data is used to supplement the base model to provide a model that exactly follows the definition in the executable but that includes legal behavior determined at runtime. Our model is similar to the VPStatic model proposed by Feng, Giffin, et al., but with different assumptions and organization. Return address information extracted from the program call stack and system call information are used to build the model. Dynamic learning alone or a combination of static analysis and dynamic learning can be used to acquire the model. We have shown that a new dynamic learning algorithm based on the assumption of a single entry point and exit point for each function can yield models of increased generality and can help reduce the false positive rate. Previous approaches based on static analysis typically work only with statically linked programs. We have developed a new component-based model and learning algorithm that builds separate models for dynamic libraries used in a program allowing the models to be shared by different program models. Sharing of models reduces memory usage when several programs are monitored, promotes reuse of library models, and simplifies model maintenance when the system updates dynamic libraries. Experiments demonstrate that the prototype detection system built with the HPDA approach has a performance overhead of less than 6% and can be used with complex real-world applications. 
When compared to other detection systems based on analysis of operating system calls, the HPDA approach is shown to converge faster during learning, to detect attacks that escape other detection systems, and to have a lower false positive rate.
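A toy version of the dynamic-learning half of such a model: pair each system call with the return-address stack active when it is issued, and accept during monitoring only pairs seen in training. This flat lookup is an illustrative simplification (names and traces below are invented); a real HPDA additionally has pushdown structure, static-analysis seeding, and the per-library component models described above.

```python
def learn_model(traces):
    """Learn a context-sensitive syscall model from training traces.

    Each trace event pairs a system call with the (hypothetical) call-stack
    context active when it was issued; the model is simply the set of
    observed (stack, syscall) pairs.
    """
    return {(tuple(stack), syscall) for stack, syscall in traces}

def is_anomalous(model, stack, syscall):
    """Flag any (stack, syscall) pair never seen during training."""
    return (tuple(stack), syscall) not in model

# Invented training data: call-stack contexts paired with system calls.
train = [
    (["main", "read_config"], "open"),
    (["main", "read_config"], "read"),
    (["main"], "write"),
]
model = learn_model(train)
```

The context sensitivity matters: `open` issued from an unfamiliar stack is flagged even though `open` itself appears in training, which is the kind of mimicry attack a context-free syscall model misses.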

Chemical Applications in Techniques of Emerging Significance: Nanoparticle Transformation in Mitochondria and Relative Tautomer Populations in Cellular Automata

Bowers, Gregory Arland January 2017
No description available.

EXTENDED COUPLED PROBABILISTIC TIMED AUTOMATA FOR MONITORING EATING ACTIVITIES OF ELDERLY PERSON

Muhajab, Hanan Nasser 30 November 2016
No description available.

The recognition of straight line patterns by bus automatons using parallel processing

Mellby, John Rolf January 1980
No description available.

Theory and Simulations in Spatial Economics

Kyureghian, Hrachya Henrik 17 February 2000
Chapter 2 deals with a linear city model à la Hotelling where the two firms share linear transport costs with their customers. Mill pricing and uniform delivery pricing are special limiting cases. We characterize the conditions for the existence of a pure strategy equilibrium in the two-stage location-price game. These enable us to identify the causes for non-existence in the two limiting cases. We solve for the equilibrium of a location game between the duopolists with an exogenously given price. When the two firms are constrained to locate at the same central spot, we show the nonexistence of pure strategy equilibria, conjecture the existence of mixed strategy equilibria, and show that any such possible equilibria will always yield positive expected profits. Chapter 3 provides simulations as well as theoretical analysis of potential spatial separation of heterogeneous agents operating on a two-dimensional grid space that represents a city. Heterogeneity refers to a characteristic which is also a determinant of individual valuation of land. We study spatial separation with respect to the distinguishing characteristic and investigate the details of emerging spatial patterns. Simulations suggest that the process of interaction with little trade friction goes through stages which resemble its end-state with high trade friction. Several theoretical examples exhibit a distinguishing characteristic upon which the simulations are based. They reflect some of the causes for spatial separation. Examples for the absence of spatial separation are also given. In Chapter 4 simulations, in addition to some theory, are used to investigate certain aspects of a city formation process. The model assumes two types of economic agents, workers and employers, operating on a two-dimensional grid. The agents have simple preferences, positive for the opposite type and negative for the own type in the own location. 
In addition, they have positive or negative preference for agglomeration in the own location. The model helps build intuition about a potentially important factor for agglomeration formation, namely, the disparity between entrepreneurial and technical skills in localities. We also determine the minimum level of positive preference for agglomeration that leads to agglomeration formation. / Ph. D.
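The grid dynamics of Chapters 3 and 4 can be caricatured with a short agent-based sketch. Two agent types live on an n x n grid, each preferring the opposite type in its 4-neighborhood (as in the preference structure above); an unhappy agent relocates to a random empty cell. The grid size, agent counts, and happiness test are invented for illustration and are not the dissertation's calibrated model.

```python
import random

def step(city, n, rng):
    """One round of a toy two-type agent model on an n x n grid.

    An agent is unhappy if its 4-neighborhood holds more same-type than
    opposite-type agents; unhappy agents jump to a random empty cell.
    """
    def nbrs(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= i + di < n and 0 <= j + dj < n:
                yield city[i + di][j + dj]

    empty = [(i, j) for i in range(n) for j in range(n) if city[i][j] is None]
    for i in range(n):
        for j in range(n):
            t = city[i][j]
            if t is None or not empty:
                continue
            same = sum(x == t for x in nbrs(i, j))
            diff = sum(x is not None and x != t for x in nbrs(i, j))
            if same > diff:                      # unhappy: prefers opposite type
                k = rng.randrange(len(empty))
                di, dj = empty[k]
                empty[k] = (i, j)                # vacated cell becomes empty
                city[di][dj], city[i][j] = t, None

# Invented setup: 30 workers ("W"), 30 employers ("E"), 40 empty cells.
rng = random.Random(0)
n = 10
cells = ["W"] * 30 + ["E"] * 30 + [None] * 40
rng.shuffle(cells)
city = [cells[i * n:(i + 1) * n] for i in range(n)]
for _ in range(20):
    step(city, n, rng)
```

Watching the grid over rounds is the simulation analogue of the chapters' question: whether mixing or separation emerges from purely local preferences, here without any agglomeration term.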
