  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Relaxation methods for simulating large power systems

Raisuddin, K. B. M. January 1989 (has links)
No description available.
222

Adapting the limited memory of microcomputers to solve large scale manufacturing problems

Connolly, Michael January 1983 (has links)
No description available.
223

On zeros of cubic L-functions

Xia, Honggang 03 August 2006 (has links)
No description available.
224

An Integrated Test Plan for an Advanced Very Large Scale Integrated Circuit Design Group

Didden, William S. 01 January 1984 (has links) (PDF)
VLSI testing poses a number of problems, including the selection of test techniques, the determination of acceptable fault coverage levels, and test vector generation. Available device test techniques are examined and compared. Design rules should be employed to assure that the design is testable. Logic simulation systems and available test utilities are compared, as are the various methods of test vector generation. The selection criteria for test techniques are identified, and a table of proposed design rules is included. Testability measurement utilities can be used to statistically predict the test generation effort. Field reject rates and fault coverage are statistically related, and acceptable field reject rates can be achieved with less than full test vector fault coverage. The methods and techniques examined form the basis of the recommended integrated test plan. The methods of automatic test vector generation are relatively primitive but are improving.
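The statistical relation between fault coverage and field reject rate that the abstract alludes to can be illustrated with the classical Williams–Brown defect-level model — an assumption here, since the abstract does not name the specific model it uses:

```python
# Williams-Brown model relating process yield (Y), fault coverage (T),
# and defect level (field reject rate): DL = 1 - Y ** (1 - T).
def defect_level(yield_fraction: float, fault_coverage: float) -> float:
    """Return the expected fraction of shipped parts that are defective."""
    return 1.0 - yield_fraction ** (1.0 - fault_coverage)

if __name__ == "__main__":
    # With 50% process yield, 95% fault coverage already keeps the field
    # reject rate near 3.4%, illustrating why less-than-full coverage can
    # still meet an acceptable reject-rate target.
    for coverage in (0.80, 0.95, 0.99, 1.00):
        print(f"T={coverage:.2f}  DL={defect_level(0.5, coverage):.4f}")
```

Sweeping the coverage shows the diminishing returns of pushing test vector generation toward 100% fault coverage.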
225

Hyperplane Arrangements with Large Average Diameter

Xie, Feng 08 1900 (has links)
This thesis deals with combinatorial properties of hyperplane arrangements. In particular, we address a conjecture of Deza, Terlaky and Zinchenko stating that the largest possible average diameter of a bounded cell of a simple hyperplane arrangement is not greater than the dimension. We prove that this conjecture is asymptotically tight in fixed dimension by constructing a family of hyperplane arrangements containing mostly cubical cells. The relationship with a result of Dedieu, Malajovich and Shub, the conjecture of Hirsch, and a result of Haimovich is presented.

We give the exact value of the largest possible average diameter for all simple arrangements in dimension two, for arrangements having at most the dimension plus two hyperplanes, and for arrangements having six hyperplanes in dimension three. In dimension three, we strengthen the lower and upper bounds for the largest possible average diameter of a bounded cell of a simple hyperplane arrangement.

Namely, let ΔA(n, d) denote the largest possible average diameter of a bounded cell of a simple arrangement defined by n hyperplanes in dimension d, and let C(·, ·) denote a binomial coefficient. We show that

• ΔA(n, 2) = 2 − 2⌊n/2⌋ / ((n − 1)(n − 2)) for n ≥ 3,
• ΔA(d + 2, d) = 2d / (d + 1),
• ΔA(6, 3) = 2,
• 3 − 6/(n − 1) + 6(⌊n/2⌋ − 2) / ((n − 1)(n − 2)(n − 3)) ≤ ΔA(n, 3) ≤ 3 + 4(2n² − 16n + 21) / (3(n − 1)(n − 2)(n − 3)),
• ΔA(n, d) ≥ 1 + (d − 1)C(n − d, d) / (C(n − d, d) + (n − d)(n − d − 1)) for n ≥ 2d.

We also address another conjecture of Deza, Terlaky and Zinchenko stating that the minimum number Φ⁰A(n, d) of facets belonging to exactly one bounded cell of a simple arrangement defined by n hyperplanes in dimension d is at least d·C(n − 2, d − 1). We show that

• Φ⁰A(n, 2) = 2(n − 1) for n ≥ 4,
• Φ⁰A(n, 3) ≥ n(n − 2)/3 + 2 for n ≥ 5.

We present theoretical frameworks, including oriented matroids, and computational tools to check the open conjectures by complete enumeration for small instances. Preliminary computational results are given. / Thesis / Master of Science (MSc)
226

Learning-Based Pareto Optimal Control of Large-Scale Systems with Unknown Slow Dynamics

Tajik Hesarkuchak, Saeed 10 June 2024 (has links)
We develop a data-driven approach to Pareto optimal control of large-scale systems, where decision makers know only their local dynamics. Using reinforcement learning, we design a control strategy that optimally balances multiple objectives. The proposed method achieves near-optimal performance and scales well with the total dimension of the system. Experimental results demonstrate the effectiveness of our approach in managing multi-area power systems. / Master of Science / We have developed a new way to manage complex systems—like power networks—where each part only knows about its own behavior. By using a type of artificial intelligence known as reinforcement learning, we've designed a method that can handle multiple goals at once, ensuring that the entire system remains stable and works efficiently, no matter how large it gets. Our tests show that this method is particularly effective in coordinating different sections of power systems to work together smoothly. This could lead to more efficient and reliable power distribution in large networks.
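The abstract gives no implementation detail, so as a minimal sketch of the weighted-sum scalarization idea behind Pareto optimal control — with toy matrices and a plain Riccati iteration standing in for the thesis's learning-based method — one can trade off two quadratic objectives and trace out Pareto-optimal feedback gains by sweeping the weight:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Solve the discrete-time Riccati equation by value iteration and
    return the state-feedback gain K for the control law u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        BT_P = B.T @ P
        K = np.linalg.solve(R + BT_P @ B, BT_P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Two decision makers with different quadratic objectives (Q1, Q2);
# a weight alpha in (0, 1) scalarizes them into one LQR problem, and
# sweeping alpha traces out Pareto-optimal controllers.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # discretized double integrator
B = np.array([[0.0], [0.1]])
Q1 = np.diag([1.0, 0.0])            # objective 1: penalize position
Q2 = np.diag([0.0, 1.0])            # objective 2: penalize velocity
R = np.array([[0.1]])

for alpha in (0.2, 0.5, 0.8):
    K = dlqr_gain(A, B, alpha * Q1 + (1 - alpha) * Q2, R)
    print(f"alpha={alpha:.1f}  K={K.ravel()}")
```

Each resulting closed loop A − BK is stable; which point on the Pareto front is chosen depends on the weight the decision makers agree on.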
227

Space to Think: Sensemaking and Large, High-Resolution Displays

Andrews, Christopher 14 September 2011 (has links)
Display technology has developed significantly over the last decade, and it is becoming increasingly feasible to construct large, high-resolution displays. Prior work has shown a number of key performance advantages for these displays that can largely be attributed to the replacement of virtual navigation (e.g., panning and zooming) with physical navigation (e.g., moving, turning, glancing). This research moves beyond the question of performance or efficiency and examines ways in which the large, high-resolution display can support the cognitively demanding task of sensemaking. The core contribution of this work is to show that the physical properties of large, high-resolution displays create a fundamentally different environment from conventional displays, one that is inherently spatial, resulting in support for a greater range of embodied resources. To support this, we describe a series of studies that examined the process of sensemaking on one of these displays. These studies illustrate how the display becomes a cognitive partner of the analyst, encouraging the use of the space for the externalization of the analyst's thought process or findings. We particularly highlight how the flexibility of the space supports the use of incremental formalism, a process of gradually structuring information as understanding grows. Building on these observations, we have developed a new sensemaking environment called Analyst's Workspace (AW), which makes use of a large, high-resolution display as a core component of its design. The primary goal of AW is to provide an environment that unifies the activities of foraging and synthesis into a single investigative thread.
AW addresses this goal through the use of an integrated spatial environment in which full text documents serve as primary sources of information, investigative tools for pursuing leads, and sensemaking artifacts that can be arranged in the space to encode information about relationships between events and entities. This work also provides a collection of design principles that fell out of the development of AW, and that we hope can guide future development of analytic tools on large, high-resolution displays. / Ph. D.
228

Fluctuation Relations for Stochastic Systems far from Equilibrium

Dorosz, Sven 28 April 2010 (has links)
Fluctuations are of great importance in systems of small length and energy scales. Measuring the pulling of single molecules or the stationary flow of mesospheres dragged through a viscous medium enables the direct analysis of work and entropy distributions. These probability distributions are the result of a large number of repetitions of the same experiment. Due to the small scale of these experiments, the outcome can vary significantly from one realization to the next. Strong theoretical predictions exist, collectively called Fluctuation Theorems, that restrict the shape of these distributions due to an underlying time reversal symmetry of the microscopic dynamics. Fluctuation Theorems are the strongest existing statements on the entropy production of systems that are out of equilibrium. Being the most important ingredient of the Fluctuation Theorems, the probability distribution of the entropy change is itself of great interest. Using numerically exact methods, we characterize entropy distributions for various stochastic reaction-diffusion systems whose underlying dynamics have different properties. We investigate these systems in their steady states and in cases where time-dependent forces act on them. This study allows us to clarify the connection between the microscopic rules and the resulting entropy production. The present work also adds to the discussion of the steady state properties of stationary probabilities and discusses a non-equilibrium current amplitude that allows us to quantify the distance from equilibrium. The presented results are part of a greater endeavor to find common rules that will eventually lead to a general understanding of non-equilibrium systems. / Ph. D.
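The detailed fluctuation theorem constraining these entropy distributions, P(+s)/P(−s) = exp(s), can be checked exactly on a toy model — a biased random walk, which is an illustrative stand-in, not one of the reaction-diffusion systems studied in the thesis:

```python
import math

def entropy_production_distribution(p: float, n_steps: int):
    """Exact distribution of the total entropy produced by a biased random
    walk (step +1 with probability p, -1 with probability q = 1 - p) after
    n_steps steps; each net forward step produces ln(p/q) of entropy
    (units of k_B = 1)."""
    q = 1.0 - p
    dist = {}
    for k in range(n_steps + 1):                 # k forward steps
        prob = math.comb(n_steps, k) * p**k * q**(n_steps - k)
        s = (2 * k - n_steps) * math.log(p / q)  # total entropy produced
        dist[s] = prob
    return dist

# Detailed fluctuation theorem: P(+s) / P(-s) = exp(s).
p, N = 0.7, 10
dist = entropy_production_distribution(p, N)
for s in sorted(dist):
    if s > 0:
        ratio = dist[s] / dist[-s]
        print(f"s={s:6.3f}  P(s)/P(-s)={ratio:9.3f}  exp(s)={math.exp(s):9.3f}")
```

The two printed columns agree to floating-point precision, reflecting the time reversal symmetry mentioned above: reversing a trajectory negates its entropy production while reweighting its probability by exactly exp(−s).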
229

The Visual Scalability of Integrated and Multiple View Visualizations for Large, High Resolution Displays

Yost, Beth Ann 19 April 2007 (has links)
Geospatial intelligence analysts, epidemiologists, sociologists, and biologists are all faced with trying to understand massive datasets that require integrating spatial and multidimensional data. Information visualizations are often used to aid these scientists, but designing the visualizations is challenging. One aspect of the visualization design space is a choice of when to use a single complex integrated view and when to use multiple simple views. Because of the many tradeoffs involved with this decision it is not always clear which design to use. Additionally, as the cost of display technologies continues to decrease, large, high resolution displays are gradually becoming a more viable option for single users. These large displays offer new opportunities for scaling up visualization to very large datasets. Visualizations that are visually scalable are able to effectively display large datasets in terms of both graphical scalability (the number of pixels required) and perceptual scalability (the effectiveness of a visualization, measured in terms of user performance, as the amount of data being visualized is scaled-up). The purpose of this research was to compare information visualization designs for integrating spatial and multidimensional data in terms of their visual scalability for large, high resolution displays. Toward that goal a hierarchical design space was articulated and a series of user experiments were performed. A baseline was established by comparing user performance with opposing visualizations on a desktop monitor. Then, visualizations were compared as more information was added using the additional pixels available with a large, high resolution display. Results showed that integrated views were more visually scalable than multiple view visualizations. The visualizations tested were even scalable beyond the limits of visual acuity. 
User performance on certain tasks improved due to the additional information that was visualized even on a display with enough pixels to require physical navigation to visually distinguish all elements. The reasons for the benefits of integrated views on large, high resolution displays include a reduction in navigation due to spatial grouping and visual aggregation resulting in the emergence of patterns. These findings can help with the design of information visualizations for large, high resolution displays. / Ph. D.
230

Mathematical analysis of a large-gap electromagnetic suspension system

Jump, Addison B. 06 June 2008 (has links)
In a form of controlled electromagnetic suspension, a permanent magnet is levitated by a magnetic field; the field is produced by electrical currents passing through coils. These currents are the control input. In a Large-Gap system the coils are at some distance from the suspended body; in general, there is no closed form expression relating the currents to the flux at the point of the suspended body. Thus, in the general case, it is not possible to establish control-theoretic results for this kind of Large-Gap suspension system. It is shown, however, that if the coil placement configuration exhibits a particular cylindrically symmetric structure, expressions can be found relating the coil positions to the flux. These expressions are used to show the existence of a unique equilibrium point and controllability, in five dimensions of control, for a generic form of Large-Gap system. The results are shown to remain true if the suspended body is rotated about a particular axis. Closed form expressions are found for the currents required to suspend the body at these variable orientations. An inequality between different classes of experimental inputs is shown to be a necessary condition for suspension of the body. It is demonstrated that the addition of coils to the system cannot lead to six dimensions of controllability. Let the system be given by the standard control equation ẋ = Ax + Bu. Closed form expressions are found for the eigenvalues of A. In the course of proving that some coil placement restrictions may be relaxed, B is shown to be related to the Vandermonde matrix. / Ph. D.
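The controllability results above rest on the standard Kalman rank condition for ẋ = Ax + Bu. A generic sketch of that test — the matrices below are toy examples, not the thesis's magnetic-suspension model:

```python
import numpy as np

def controllability_matrix(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Kalman controllability matrix [B, AB, A^2 B, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A: np.ndarray, B: np.ndarray) -> bool:
    """(A, B) is controllable iff the controllability matrix has full rank."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Toy example: a chain of three integrators driven from one end is
# fully controllable ...
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
B = np.array([[0], [0], [1]], dtype=float)
print(is_controllable(A, B))

# ... while a repeated eigenvalue with a single input leaves some
# directions unreachable, capping the attainable dimensions of control.
A2 = np.diag([1.0, 2.0, 2.0])
print(is_controllable(A2, B))
```

The second case mirrors the thesis's six-dimensional obstruction in spirit: adding more of the same kind of actuation cannot make an unreachable direction reachable.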
