About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
761

Energy methods for lossless systems using quadratic differential forms

Rao, Shodhan January 2008 (has links)
In this thesis, we study the properties of lossless systems using the concept of quadratic differential forms (QDFs). Based on observation of physical linear lossless systems, we define a lossless system as one for which there exists a QDF, known as an energy function, that is positive along nonzero trajectories of the system and whose derivative along those trajectories is zero when the inputs to the system are set to zero. Using this definition, we prove that if a lossless system is autonomous, then it is oscillatory. We also give an algorithm whose output is a two-variable polynomial that induces an energy function of a lossless system, and we describe a suitable way of splitting a given energy function into its potential and kinetic energy components. We further study the space of QDFs for an autonomous linear lossless system, and note that this space can be decomposed into the spaces of conserved and zero-mean quantities. We then show that there is a link between zero-mean quantities and generalized Lagrangians of an autonomous linear lossless system. Finally, we study various methods of synthesis of lossless electric networks, such as the Cauer and Foster methods, and arrive at an abstract definition of synthesis of a positive QDF that represents the total energy of the network to be synthesized. We show that the Cauer and Foster methods of synthesis can be cast in the framework of our definition. We show that our definition has applications in stability tests for linear systems, and we also give a new Routh-Hurwitz-like stability test.
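The thesis's new stability test is not reproduced here, but the classical Routh-Hurwitz test it is compared against can be sketched in a few lines. This is a minimal illustration that treats degenerate zero-pivot rows as unstable rather than handling the special cases:

```python
def routh_stable(coeffs):
    """Return True if all roots of the polynomial (coefficients in
    decreasing degree, leading coefficient positive) lie in the open
    left half-plane, via the classical Routh-Hurwitz array."""
    n = len(coeffs)
    # The first two rows interleave the polynomial coefficients.
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    while len(rows[1]) < len(rows[0]):
        rows[1] = rows[1] + [0.0]       # pad so both rows have equal length
    for _ in range(n - 2):
        prev, cur = rows[-2], rows[-1]
        if cur[0] == 0:                 # degenerate pivot: not handled in this sketch
            return False
        new = [(cur[0] * prev[i + 1] - prev[0] * cur[i + 1]) / cur[0]
               for i in range(len(cur) - 1)] + [0.0]
        rows.append(new)
    # Stable iff the first column of the array has no sign changes.
    return all(r[0] > 0 for r in rows[:n])
```

For example, s^2 + 3s + 2 = (s+1)(s+2) passes the test, while s^2 - s + 1, which has right-half-plane roots, fails it.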
762

Impedance spectroscopy for cellular and biomolecular analysis

Malleo, Daniele January 2009 (has links)
The application of microfabrication technology to molecular and cell biology has motivated the rapid development of a novel class of microdevices collectively known as ‘Lab On a Chip’ devices. Impedance spectroscopy is a non-invasive, label-free analytical technique that is amenable to miniaturization, electronic integration, automation, and scalability within ‘LOC’ devices. This thesis describes a microfabricated device for performing continuous impedance analysis of individual cells held in an array of hydrodynamic traps. This device enables continuous long-term analysis of cells and time-dependent measurement of changes elicited by cytotoxic agents and drug compounds. Finite element models are employed to predict the response to changes in the captured single cells (cell position in the trap, cell size, membrane conductivity). The system is used to assay the response of HeLa cells to the effects of the surfactant Tween 20 and Streptolysin-O, a bacterial pore-forming toxin. Novel electrode materials that reduce the parasitic effect of electrode polarisation are described and characterised. These are iridium oxide and PPy/PSS (polypyrrole/poly(styrenesulphonate)). Impedance data are analysed in terms of equivalent circuit models. The findings presented suggest that iridium oxide and PPy/PSS could be used as alternative materials to platinum black and plain platinum. PPy/PSS electrodes offer the highest electrode/electrolyte interface area and least variation with time. Finally, a silicon-based capacitive sensor with nanometric plate separation (a nanogap capacitor) is characterised by impedance spectroscopy and used to explore the behaviour of double layers constrained in cavities of dimensions comparable to the Debye length.
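Equivalent-circuit analysis of the kind mentioned above can be illustrated with a simplified Randles cell; the component values below are placeholders for illustration, not fitted to the devices described in the thesis:

```python
import math

def randles_impedance(f, r_s=100.0, r_ct=10e3, c_dl=1e-6):
    """Impedance (ohms, complex) of a simplified Randles equivalent
    circuit: solution resistance R_s in series with the parallel
    combination of charge-transfer resistance R_ct and double-layer
    capacitance C_dl."""
    omega = 2 * math.pi * f
    z_c = 1 / (1j * omega * c_dl)          # capacitor impedance
    z_par = (r_ct * z_c) / (r_ct + z_c)    # parallel R_ct || C_dl
    return r_s + z_par

# At very low frequency the capacitor blocks: |Z| -> R_s + R_ct.
# At very high frequency the capacitor shorts: |Z| -> R_s.
```

Fitting measured spectra to such a model is what lets the resistive and capacitive contributions of the electrode/electrolyte interface be separated.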
763

Self-organising an indoor location system using a paintable amorphous computer

Revill, John David January 2007 (has links)
This thesis investigates new methods for self-organising a precisely defined pattern of intertwined number sequences which may be used in the rapid deployment of a passive indoor positioning system's infrastructure. A future hypothetical scenario is considered in which computing particles are suspended in paint and spread over a ceiling. A spatial pattern is then formed over the painted ceiling. Any small portion of the spatial pattern may be decoded, by a simple camera-equipped device, to provide a unique location to support location-aware pervasive computing applications. Such a pattern is established from the interactions of many thousands of locally connected computing particles that are disseminated randomly and densely over a surface, such as a ceiling. Each particle initially has no knowledge of its location or network topology and shares no synchronous clock or memory with any other particle. The challenge addressed within this thesis is how such a network of computing particles that begin in such an initial state of disarray and ignorance can, without outside intervention or expensive equipment, collaborate to create a relative coordinate system. The thesis shows how the coordinate system can be created to be coherent, even in the face of obstacles, and to closely represent the actual shape of the networked surface itself. The precision errors incurred during the propagation of the coordinate system are identified, and the distributed algorithms used to avoid them are explained and demonstrated through simulation. A new perimeter detection algorithm is proposed that discovers network edges and other obstacles without the use of any existing location knowledge. A new distributed localisation algorithm is demonstrated to propagate a relative coordinate system throughout the network while remaining free of the error introduced by the network perimeter that is normally seen in non-convex networks.
This localisation algorithm operates without prior configuration or calibration, allowing the coordinate system to be deployed without expert manual intervention or on networks that are otherwise inaccessible. The painted ceiling's spatial pattern, when based on the proposed localisation algorithm, is discussed in the context of an indoor positioning system.
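As an illustration of the coordinate-propagation idea (not the thesis's algorithm, which additionally manages perimeter-induced error), integer hop counts from a seed particle can be spread through the network by purely local interactions:

```python
from collections import deque

def hop_counts(adjacency, seed):
    """Breadth-first propagation of hop counts from a seed particle
    through a locally connected network -- the basic gradient primitive
    on which amorphous-computing coordinate systems are built.
    `adjacency` maps each particle to the list of its radio neighbours."""
    dist = {seed: 0}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        for neigh in adjacency[node]:
            if neigh not in dist:          # first arrival = shortest hop path
                dist[neigh] = dist[node] + 1
                queue.append(neigh)
    return dist
```

Hop counts from several seeds give each particle a tuple of distances from which relative coordinates can be derived; the error analysis in the thesis concerns how such gradients distort around obstacles and perimeters.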
764

Propulsion of polymer particles on caesium ion-exchanged channel waveguides for stem cell sorting applications

Mohamad Shahimin, Mukhzeer January 2009 (has links)
Optical trapping of particles has become a powerful non-mechanical and nondestructive technique for precise particle positioning. The manipulation of particles in the evanescent field of a channel waveguide potentially allows for sorting and trapping of several particles and cells simultaneously. In the evanescent field above an optical channel waveguide, particles experience three optical forces: i) a transverse gradient force, which acts in the direction of the intensity gradient; ii) a scattering force, which acts in the direction of wave propagation and is proportional to the surface intensity; and iii) an absorption force, which depends on the complex refractive index of the particle. A particle in the evanescent field will be propelled and trapped with a dependence on the intensity gradient (a property dependent upon the physical characteristics of the waveguide). A channel waveguide producing such an evanescent field can be photolithographically defined on a glass substrate and thus has the potential to be integrated into a single-chip device. This thesis describes the studies carried out, both theoretically and experimentally, to establish optimum waveguide fabrication conditions and the experimental requirements to ultimately allow for the separation of polymer particles and mammalian cells according to their size and refractive index. Theoretical aspects of the interaction of particles and cells on surfaces were evaluated and their Brownian motion was investigated. Optical channel waveguides of different characteristics have been fabricated using a caesium ion-exchange process on soda-lime substrates. The propulsion of polymer particles has been achieved and characterised against different optical parameters, waveguide conditions and particle characteristics on different surfaces. The propulsion of lymphoblastoma cells was demonstrated and the trapping of teratocarcinoma cells was evaluated.
These results provide evidence for the potential application of the system for trapping and sorting stem cells.
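The evanescent field that drives the propulsion decays exponentially above the waveguide surface; its 1/e intensity penetration depth follows the standard total-internal-reflection relation. The indices, wavelength and angle below are illustrative, not the thesis's fabricated waveguide values:

```python
import math

def evanescent_penetration_depth(wavelength, n1, n2, theta_deg):
    """1/e intensity penetration depth d = lambda / (4*pi*sqrt((n1 sin t)^2 - n2^2))
    of the evanescent field produced by total internal reflection at
    angle theta (beyond the critical angle), for core index n1 and
    cover index n2."""
    theta = math.radians(theta_deg)
    term = (n1 * math.sin(theta)) ** 2 - n2 ** 2
    if term <= 0:
        raise ValueError("angle below the critical angle: no evanescent field")
    return wavelength / (4 * math.pi * math.sqrt(term))

# E.g. a 1064 nm guided mode in glass (n1 = 1.52) under water (n2 = 1.33)
# at 75 degrees gives a penetration depth on the order of 100 nm,
# which is why only particles very close to the surface are propelled.
```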
765

Robust stability and performance for multiple model switched adaptive control

Buchstaller, Dominic January 2010 (has links)
While the concept of switching between multiple controllers to achieve a control objective is not new, the available analysis to date imposes various structural and analytical assumptions on the controlled plant. The analysis presented in this thesis, which is concerned with an Estimation-based Multiple Model Switched Adaptive Control (EMMSAC) algorithm originating from Fisher-Jeffes (2003) and Vinnicombe (2004), is shown not to have such limitations. As the name suggests, the key difference between EMMSAC and common multiple-model switching schemes is that the switching decision is based on the outcome of an optimal estimation process. The use of such optimal estimators is the key that allows for a simplified, axiomatic approach to analysis. Also, since estimators may be implemented by standard optimisation techniques, their construction is feasible for a broad class of systems. The presented analysis is the first of its kind to provide comprehensive robustness and performance guarantees for a multiple model control algorithm, in terms of $l_p,\ 1\le p\le \infty$ bounds on the closed loop gain, and is applicable to the class of minimal MIMO LTI plants. A key feature of this bound is that it permits the on-line alteration of the plant model set (dynamic EMMSAC), in contrast to the usual assumption that the plant model set is constant (static EMMSAC). It is shown that a static EMMSAC algorithm is conservative, whereas a dynamic EMMSAC algorithm, based on the technique of dynamically expanding the plant model set, can be universal. It is also shown that the established gain bounds are invariant to a refinement of the plant model set, e.g. as successively finer sampling of a continuum of plants. Dynamic refinement of the plant model set is considered with a view to increasing expected performance.
Furthermore, the established bounds, which are also a measure of performance, have the property that they are explicit in the free variables of the algorithm. It is shown that this property of the bound forms the basis for a principled, performance-orientated approach to design. Explicit, performance-orientated design examples are given, and the trade-off between dynamic and static constructions of plant model sets is investigated with respect to prior information on the acting disturbances and the uncertainty.
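The switching principle can be caricatured in a few lines: switch to the candidate model that a least-squares estimator fits best to the observed data. This toy static-gain sketch is only an illustration of estimation-based switching, not the EMMSAC algorithm or its gain bounds:

```python
import math

def switch_by_residual(candidate_gains, inputs, outputs):
    """Toy multiple-model switching in the spirit of estimation-based
    schemes: each candidate models y = g*u; select the gain whose
    l2 residual against the observed input/output data is smallest."""
    def residual(g):
        return math.sqrt(sum((y - g * u) ** 2 for u, y in zip(inputs, outputs)))
    return min(candidate_gains, key=residual)
```

In EMMSAC proper, the residual of an optimal estimator plays this role for dynamic plant models, and the analysis bounds the closed-loop gain even while the candidate set is altered on-line.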
766

Providing concurrent implementations for Event-B developments

Edmunds, Andrew January 2010 (has links)
The Event-B method is a formal approach to modelling systems which incorporates the notion of refinement. This work bridges the abstraction gap between the lowest level of Event-B refinement and a working implementation. We focus on the link between Event-B and concurrent, object-oriented implementations and introduce an intermediate, object-oriented style specification notation called Object-oriented Concurrent-B (OCB). The OCB level of abstraction hides implementation details of locking and blocking, and provides the developer with a clear view of atomicity using labelled atomic clauses. OCB non-atomic clauses are given Event-B semantics, and OCB atomic clauses map to atomic events. Automatic translation of an OCB specification gives rise to an Event-B model and Java source code. The Java program will have atomicity that corresponds to the formal model (and therefore OCB clauses), and structure that is derived from the OCB model. We introduce process and monitor classes. Process classes allow specification of interleaving behaviour using non-atomic constructs, where atomic regions are defined by labelled atomic clauses. Monitor classes may be shared between the processes and provide mutually exclusive access to the shared data using atomic procedure calls. Labelled atomic clauses map to events guarded by a program counter derived from the label. This allows us to model the ordered execution of the implementation. The approach can be applied to object-oriented systems in general, but we choose Java as a target for working programs. Java's built-in synchronisation mechanism is used to provide mutually exclusive access to data. We discuss some problems related to Java programming, with regard to locking and concurrency, and their effect on OCB. The OCB syntax and mappings to Event-B and Java are defined, and details of tool support and case studies follow.
An extension to OCB is described in which a number of objects can be updated within a single atomic clause, facilitated by Java SDK 5.0 features. The extension allows direct access to variables of a monitor using dot notation, and multiple procedure calls in a clause. We also introduce new features for atomic actions, such as a sequential operator and atomic branching and looping.
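The monitor-class idea described above can be sketched outside Java (here in Python, purely for illustration): shared state is reachable only through procedures that run under a single lock, so each call behaves as one atomic clause:

```python
import threading

class CounterMonitor:
    """Sketch of a monitor class in the OCB sense: shared data is only
    reachable through procedures that execute under mutual exclusion,
    so each procedure call is one atomic action."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        # One atomic procedure call: read-modify-write under the lock.
        with self._lock:
            self._value += 1

    def read(self):
        with self._lock:
            return self._value
```

In the Java target, the same guarantee is obtained from `synchronized` methods; the translation from OCB ensures the generated atomicity matches the labelled clauses of the formal model.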
767

One-pass algorithms for large and shifting data sets

Farran, Bassam January 2010 (has links)
For many problem domains, practitioners are faced with the problem of ever-increasing amounts of data. Examples include the UniProt database of proteins, which now contains ~6 million sequences, and the KDD ’99 data, which consists of ~5 million points. At these scales, the state-of-the-art machine learning techniques are not applicable, since the multiple passes they require through the data are prohibitively expensive, and a need for different approaches arises. Another issue arising in real-world tasks, which is only recently becoming a topic of interest in the machine learning community, is distribution shift, which occurs naturally in many problem domains such as intrusion detection and EEG signal mapping in the Brain-Computer Interface domain. This means that the i.i.d. assumption between the training and test data does not hold, causing classifiers to perform poorly on the unseen test set. We first present a novel, hierarchical, one-pass clustering technique that is capable of handling very large data. Our experiments show that the quality of the clusters generated by our method does not degrade, while making vast computational savings compared to algorithms that require multiple passes through the data. We then propose Voted Spheres, a novel, non-linear, one-pass, multi-class classification technique capable of handling millions of points in minutes. Our empirical study shows that it achieves state-of-the-art performance on real world data sets, in a fraction of the time required by other methods. We then adapt the VS to deal with covariate shift between the training and test phases using two different techniques: an importance weighting scheme and kernel mean matching. Our results on a toy problem and the real-world KDD ’99 data show an increase in performance for our VS framework.
Our final contribution involves applying the one-pass VS algorithm, along with the adapted counterpart (for covariate shift), to the Brain-Computer Interface domain, in which linear batch algorithms are generally used. Our VS-based methods outperform the SVM, and perform very competitively with the submissions of a recent BCI competition, which further shows the robustness of our proposed techniques to different problem domains.
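Of the two adaptation techniques, importance weighting is the simpler to sketch. The crude one-dimensional histogram density-ratio estimate below is an illustrative stand-in, not the kernel mean matching used in the thesis:

```python
def importance_weights(train, test, bins=5, lo=0.0, hi=1.0):
    """Histogram-based estimate of the density ratio p_test(x)/p_train(x)
    over [lo, hi), used to reweight training points under covariate
    shift so the training loss better reflects the test distribution."""
    width = (hi - lo) / bins

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        return [c / len(xs) for c in counts]

    p_tr, p_te = hist(train), hist(test)

    def weight(x):
        idx = min(int((x - lo) / width), bins - 1)
        return p_te[idx] / p_tr[idx] if p_tr[idx] > 0 else 0.0

    return [weight(x) for x in train]
```

Training points in regions over-represented at test time receive weight above 1, and points in regions the test set never visits receive weight 0.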
768

How micro-evolution can guide macro-evolution : multi-scale search via evolved modular variation

Mills, Rob January 2010 (has links)
A divide-and-conquer approach to problem solving can in principle be far more efficient than tackling a problem as a monolithic whole. This type of approach is most appropriate when problems have the type of modular organisation known as near-decomposability, as implicit in many natural and engineered systems. Existing methods create higher scale composite units from non-random combinations of lower-scale units that reflect sub-problem optima. The use of composite units affords search at a higher scale that, when applied recursively, can ultimately lead to optimal top-level solutions. But for this approach to be efficient, we must decompose a problem in a manner that respects its intrinsic modular structure, information which is in general unavailable a priori. Thus, identifying and subsequently exploiting the structure recursively is vital in providing fully automatic problem decomposition. In this thesis, we define a family of algorithms that probabilistically adapt the scale of decomposition they use to reflect the structure in a problem. By doing so, they can provide optimisation that is provably superior to any single scale of search in nearly decomposable problems. Our proposed framework couples two adaptive processes: a rapid, fine-scale search that guides a slower adaptation of the decomposition. This results in a scaling up of the units used in the rapid search, now operating at a macro-scale. We find that separating the timescales for the fine-scale search and the adaptation of the decomposition is crucial for this kind of scalable optimisation. Using a simple and general class of problems that have no systematic structure, we demonstrate how our approach can nevertheless exploit the incidental structure present. 
Furthermore, we use idealised cases that have simple modular structure to demonstrate how our method scales as Θ(N log N) (where N is the problem size), despite the fact that single-scale search methods scale as Ω(2^√N), and we support this distinction analytically. Although our approach is algorithmically superior to single-scale search, the underlying principles that it is constructed from are simple and can operate using only localised feedback. We discuss intriguing parallels between our approach and the significance of associative evolution for ecosystem adaptation. Our results suggest that macro-evolutionary processes might not be merely extended micro-evolution, but that the action of evolutionary processes upon several scales is fundamentally different from the conventional view of (micro-)evolution at a single scale.
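The asymptotic gap claimed above is easy to make concrete numerically, treating the two bounds as exact operation counts purely for illustration:

```python
import math

def multi_scale_cost(n):
    """N log N cost of the multi-scale search on the idealised problems."""
    return n * math.log2(n)

def single_scale_cost(n):
    """2^sqrt(N) lower bound for single-scale search on the same problems."""
    return 2 ** math.sqrt(n)

# The gap widens quickly: by N = 400, single-scale search needs on the
# order of 2^20 ~ 1e6 evaluations versus N log N ~ 3.5e3.
```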
769

A General Framework for the Flow Analysis of Concurrent Programs

Lam, Patrick 08 1900 (has links)
Standard techniques for analysing sequential programs are severely constrained when applied to a concurrent program because they cannot take full advantage of the concurrent structure of the program. In this work, we overcome this limitation using a novel approach which "lifts" a sequential dataflow analysis to a concurrent analysis. First, we introduce concurrency primitives which abstract away from the details of how concurrency features are implemented in real programming languages. Using these primitives, we describe how sequential analyses can be made applicable to concurrent programs. Under some circumstances, there is no penalty for concurrency: our method produces results which are as precise as the sequential analysis. Our lifting is straightforward, and we illustrate it on some standard analyses: available expressions, live variables, and generalized constant propagation. Finally, we describe how concurrency features of real languages can be expressed using our abstract concurrency primitives, and present analyses for finding our concurrency primitives in real programs.
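One of the named analyses, live variables, is a standard backward dataflow problem. A minimal fixed-point implementation over a control-flow graph (the sequential analysis before any concurrent lifting) can be sketched as:

```python
def live_variables(blocks, succ):
    """Classic backward live-variables dataflow, iterated to a fixed
    point.  `blocks` maps a block name to (use_set, def_set); `succ`
    maps a block name to its successor blocks in the CFG.
    live_in[b]  = use[b] | (live_out[b] - def[b])
    live_out[b] = union of live_in over successors of b."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b, (use, defs) in blocks.items():
            out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
            new_in = use | (out - defs)
            if new_in != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b] = new_in, out
                changed = True
    return live_in, live_out
```

For a straight-line program `x = 1; y = x + 1; return y`, the analysis reports that `x` is live entering the second block and `y` is live entering the third; the lifting described in the work extends such transfer functions across the interleavings of concurrent threads.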
770

Spatial statistical models in the planning of service provision

Pacheco, Juliano Anderson 02 1900 (has links)
This thesis aims to systematise the use of spatial statistical models in the planning of services, through computational tools that allow the construction of those models. A bibliographical review presents the criteria used in the planning of services, the concepts relevant to the statistical analysis of spatial data, and the interaction between these topics. The creation of a spatial database suitable for the analysis is then explored, together with the computational tools that make the application of the techniques developed in this thesis possible, with emphasis on the R language and its spatial-analysis packages; the GeoDa, TerraView and TerraCrime tools are also presented. Next, the analysis techniques applicable to spatial phenomena modelled by points and by areas, known respectively as point processes and area processes, are explored. Exploratory (descriptive) techniques are presented for visualising the data and for verifying the type of distribution or the existence of clusters, and inferential techniques are presented for modelling and quantifying spatial dependence, estimating surfaces, and fitting spatial regression models. Each technique is illustrated with the computational tools presented. As a practical application, spatial statistical analysis is applied to public-safety data, in which factors associated with crime rates are identified through spatial regression. The work concludes with the requirements to be taken into account in building a computational environment for the analysis of spatial data, an assessment of each spatial statistical method explored in the planning of services, and the results of the application in the public-safety area.
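A central ingredient of the spatial-regression models mentioned above is the spatial lag: the weighted average of neighbouring areas' values. A minimal row-standardised version, with an illustrative neighbourhood structure, can be sketched as:

```python
def spatial_lag(values, neighbours):
    """Row-standardised spatial lag Wy: for each area i, the mean of the
    values of its neighbours.  `neighbours[i]` lists the indices of the
    areas adjacent to area i (areas with no neighbours get lag 0)."""
    lagged = []
    for neigh in neighbours:
        if neigh:
            lagged.append(sum(values[j] for j in neigh) / len(neigh))
        else:
            lagged.append(0.0)
    return lagged
```

Including the lag of a crime rate (or of a covariate) as a regressor is what allows such models to quantify how an area's outcome depends on conditions in adjacent areas; dedicated packages such as R's spdep implement the full estimation machinery.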
