101

Extension and Generalization of Newell's Simplified Theory of Kinematic Waves

Ni, Daiheng 19 November 2004 (has links)
Flow of traffic on freeways and limited-access highways can be represented as a series of kinematic waves. Solutions to the governing systems of equations become problematic under congested traffic flow conditions and on complicated (real-world) networks. A simplified theory of kinematic waves was previously proposed. Its simplifying elements include translation of the problem to a moving coordinate system, adoption of bi-linear speed-density relationships, and adoption of restrictive constraints at the on- and off-ramps. However, these simplifying assumptions preclude application of this technique to most practical situations. This research explores the limitations of the simplified theory of kinematic waves. First, this research documents a relaxation of several key constraints. In the original theory, priority was given to on-ramp merging vehicles so that they could bypass any queue at the merge. This research proposes to relax this constraint using a capacity-based weighted fair queuing (CBWFQ) merge model. In the original theory, a downstream queue affects upstream traffic as a whole, and exiting traffic is always able to leave as long as it reaches the diverge. This research proposes that this diverge constraint be replaced with a contribution-based weighted splitting (CBWS) diverge model. This research proposes a revised notation system, permitting the solution techniques to be extended to freeway networks with multiple freeways and their ramps. This research proposes a generalization to permit application of the revised theory to general transportation networks. A generalized CBWFQ merge model and a generalized CBWS diverge model are formulated to deal with merging and diverging traffic. Finally, this research presents a computational procedure for solving the new system of equations. Comparisons of model predictions with field observations are conducted on GA 400 in Atlanta, and investigations into the performance of the proposed CBWFQ and CBWS models are conducted. Results are encouraging; quantitative measures suggest satisfactory accuracy with narrow confidence intervals.
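A minimal sketch of the capacity-proportional idea behind a merge model such as CBWFQ, in Python: competing upstream approaches share the downstream supply in proportion to their capacities, capped by their demands, with any unused share redistributed. The allocation rule, function names and figures are illustrative assumptions, not the thesis's exact formulation.

```python
# Hypothetical capacity-proportional merge allocation (illustrative only).
def cbwfq_merge(demands, capacities, downstream_supply):
    """Split downstream supply among competing upstream approaches in
    proportion to their capacities, capped by each approach's demand."""
    weights = [c / sum(capacities) for c in capacities]
    flows = [min(d, w * downstream_supply) for d, w in zip(demands, weights)]
    # Redistribute any unused supply to approaches whose demand exceeds
    # their capacity-weighted share.
    leftover = downstream_supply - sum(flows)
    for i, d in enumerate(demands):
        if leftover <= 0:
            break
        extra = min(leftover, d - flows[i])
        flows[i] += extra
        leftover -= extra
    return flows

# Example: mainline and on-ramp competing for 1800 veh/h of downstream supply.
print(cbwfq_merge(demands=[1200, 900], capacities=[2000, 1000], downstream_supply=1800))
# -> [1200, 600]
```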
102

An Algorithmic Approach To Some Matrix Equivalence Problems

Harikrishna, V J 01 January 2008 (has links)
The analysis of similarity of matrices over fields, as well as over integral domains which are not fields, is a classical problem in linear algebra and has received considerable attention. A related problem is that of simultaneous similarity of matrices. Many interesting algebraic questions that arise in such problems are discussed by Shmuel Friedland [1]. A special case of this problem is the Simultaneous Unitary Similarity problem for hermitian matrices, which we describe as follows: given a collection of m ordered pairs of similar n×n hermitian matrices, denoted {(H_l, D_l)}_{l=1..m}, (1) determine whether there exists a unitary matrix U such that U H_l U* = D_l for all l, and (2) if such a U exists, find one (here U* is the conjugate transpose of U). The problem is easy for m = 1 and challenging for m > 1. The problem stated above is the algorithmic version of classifying hermitian matrices up to unitary similarity. Any problem involving classification of matrices up to similarity is considered to be "wild" [2]. The difficulty of classifying matrices up to unitary similarity is an indicator of the toughness of problems involving matrices in unitary spaces [3] (pp. 44-46). If in the statement of the problem we replace the collection {(H_l, D_l)}_{l=1..m} by a collection of m ordered pairs of complex square matrices, denoted {(A_l, B_l)}_{l=1..m}, we obtain the Simultaneous Unitary Similarity problem for square matrices. If instead we consider k ordered pairs of complex rectangular m×n matrices, denoted {(Y_l, Z_l)}_{l=1..k}, the Simultaneous Unitary Equivalence problem for rectangular matrices asks whether there exist an m×m unitary matrix U and an n×n unitary matrix V such that U Y_l V* = Z_l for all l and, if they exist, to find them. In this thesis we describe algorithms to solve these problems. The Simultaneous Unitary Similarity problem for square matrices is challenging even for a single pair (m = 1) if the matrices involved, i.e. A_1 and B_1, are not normal. In an expository article, Shapiro [4] describes the methods available to solve this problem by arriving at a canonical form: A_1 or B_1 is used to arrive at a canonical form, and the matrices are unitarily similar if and only if the other matrix leads to the same canonical form. In the second chapter of this thesis we propose an iterative algorithm to solve the Simultaneous Unitary Similarity problem for hermitian matrices. In each iteration we either get a step closer to "the simple case" or end up solving the problem. The simple case, described in detail in the first chapter, corresponds to finding whether there exists a diagonal unitary matrix U such that U H_l U* = D_l for all l. Solving this case involves defining "paths" made up of non-zero entries of H_l (or D_l). We use these paths to define an equivalence relation that partitions L = {1, ..., n}. Using these paths we associate with each H_l(i, j) and D_l(i, j) scalars denoted pr(H_l(i, j)) and pr(D_l(i, j)) (pr indicates that these scalars are obtained by taking products of non-zero elements along the paths from i, j to their class representative). If i (i ∈ L) belongs to the class [d(i)] (d(i) ∈ L), we denote by u_i^sol a modulus-one scalar expressed in terms of u_{d(i)} using the path from i to d(i); the free variable u_{d(i)} can be chosen to be any modulus-one scalar. Let U^sol be the diagonal unitary matrix U^sol = diag(u_1^sol, u_2^sol, ..., u_n^sol).
We show that a diagonal U such that U H_l U* = D_l exists if and only if pr(H_l(i, j)) = pr(D_l(i, j)) for all l, i, j and U^sol H_l (U^sol)* = D_l. Solving the simple case sets the trend for solving the general case. In the general case, in each iteration we look for a unitary U of the form U = blk-diag(U_1, ..., U_r), where each U_i is a p_i × p_i unitary matrix (i ∈ L = {1, ..., r}), such that U H_l U* = D_l. Our aim in each iteration is to get at least a step closer to the simple case. Based on the p_i we partition the rows and columns of H_l and D_l to obtain p_i × p_j sub-matrices, denoted F_lij in H_l and G_lij in D_l. The aim is to diagonalize either F_lij* F_lij or F_lij F_lij* and thereby get a step closer to the simple case. If the square sub-matrices are multiples of unitary matrices and the rectangular sub-matrices are zero, we say that the collection is in Non-reductive form, and in this case we cannot get a step closer to the simple case. In Non-reductive form, just as in the simple case, we define a relation on L using paths made up of these non-zero (multiple-of-unitary) sub-matrices, which gives a partition of L. Using these paths we associate with F_lij and G_lij matrices denoted pr(F_lij) and pr(G_lij) respectively, which are multiples of unitary matrices. If there exist pr(F_lij) that are not multiples of the identity, then we diagonalize these matrices and move a step closer to the simple case; the collection is then said to be in Reduction form. If not, the collection is in Solution form. In Solution form we identify a unitary matrix U^sol = blk-diag(U_1^sol, U_2^sol, ..., U_r^sol), where U_i^sol is a p_i × p_i unitary matrix expressed in terms of U_{d(i)} using the path from i to [d(i)] (i ∈ [d(i)], d(i) ∈ L, with U_{d(i)} free). We show that there exists U such that U H_l U* = D_l if and only if pr(F_lij) = pr(G_lij) and U^sol H_l (U^sol)* = D_l. Thus in at most n steps the algorithm solves the Simultaneous Unitary Similarity problem for hermitian matrices. In the second chapter we also relate the Simultaneous Unitary Similarity problem for hermitian matrices to the simultaneous closed-system evolution problem for quantum states. In the third chapter we describe algorithms to solve the Unitary Similarity problem for square matrices (a single ordered pair) and the Simultaneous Unitary Equivalence problem for rectangular matrices. These problems are related to the Simultaneous Unitary Similarity problem for hermitian matrices. The algorithms described in this chapter are similar in flow to the algorithm of the second chapter, which shows that it is the search for unitary similarity that makes these forms possible; the hermitian (or normal) nature of the matrices is of secondary importance. Non-reductive form is the same as in the hermitian case. The definition of the paths changes a little, but once the paths are defined and the set L is partitioned, the definitions of Reduction form and Solution form are similar to their counterparts in the hermitian case. In the fourth chapter we analyze the worst-case complexity of the proposed algorithms. The main computations in all these algorithms are diagonalizing normal matrices, partitioning L and calculating the products pr(F_lij) and pr(G_lij). Finding the partition of L amounts to partitioning an undirected graph in the square case and a bipartite graph in the rectangular case. Also in this chapter we demonstrate the working of the proposed algorithms by running through their steps on three examples.
In the fifth and final chapter we show that deciding whether a given collection of ordered pairs of normal matrices is simultaneously similar is the same as deciding whether the collection is simultaneously unitarily similar. We also discuss why an algorithm for the Simultaneous Similarity problem, along the lines of the algorithms discussed in this thesis, may not exist.
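The "simple case" described above comes down to propagating phases along paths of non-zero entries. The numpy sketch below illustrates that idea for a single hermitian pair only (the thesis treats whole collections and the general block case); the construction is an assumed reconstruction for illustration, not the thesis's algorithm verbatim.

```python
# Decide whether a single hermitian pair H, D satisfies U H U* = D for some
# *diagonal* unitary U, by propagating phases along paths of non-zero entries.
# Illustrative reconstruction only; the thesis handles collections {(H_l, D_l)}.
import numpy as np
from collections import deque

def diagonal_unitary_similarity(H, D, tol=1e-9):
    n = H.shape[0]
    # (U H U*)_{ij} = u_i * conj(u_j) * H_{ij}, so the moduli must already agree.
    if not np.allclose(np.abs(H), np.abs(D), atol=tol):
        return None
    u = np.full(n, np.nan + 0j)
    for start in range(n):                # one free phase per connected class
        if not np.isnan(u[start]):
            continue
        u[start] = 1.0                     # the free variable u_{d(i)}
        queue = deque([start])
        while queue:                       # walk paths of non-zero entries of H
            i = queue.popleft()
            for j in range(n):
                if abs(H[i, j]) > tol and np.isnan(u[j]):
                    # enforce u_i * conj(u_j) * H_{ij} = D_{ij}
                    u[j] = np.conj(D[i, j] / (u[i] * H[i, j]))
                    queue.append(j)
    U = np.diag(np.where(np.isnan(u), 1.0, u))
    # Consistency across cycles and classes is confirmed by a final full check.
    return U if np.allclose(U @ H @ U.conj().T, D, atol=1e-7) else None
```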
103

Traffic engineering for multi-homed mobile networks.

Chung, Albert Yuen Tai, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
This research is motivated by recent developments in the Internet Engineering Task Force (IETF) to support seamless integration of moving networks deployed in vehicles with the global Internet. The effort, known as Network Mobility (NEMO), paves the way to support high-speed Internet access in mass transit systems (e.g. trains, buses, ferries, and planes) through the use of on-board mobile routers embedded in the vehicle. One of the critical research challenges of this vision is to achieve high-speed and reliable back-haul connectivity between the mobile router and the rest of the Internet. The problem is particularly challenging because a mobile router must rely on wireless links with limited bandwidth and unpredictable quality variations as the vehicle moves around. In this thesis, the multi-homing concept is applied to approach the problem. With multi-homing, the mobile router has more than one connection to the Internet. This is achieved by connecting the mobile router to a diverse array of wireless access technologies (e.g., GPRS, CDMA, 802.11, and 802.16) and/or a multiplicity of wireless service providers. While the aggregation helps address the bandwidth problem, the quality variation problem can be mitigated by employing advanced traffic engineering techniques that dynamically control inbound and outbound traffic over multiple connections. More specifically, the thesis investigates traffic engineering solutions for mobile networks that can effectively address the performance objectives, e.g. maximizing profit for the mobile network operator, guaranteeing quality of service for the users, and maintaining fair access to the back-haul bandwidth. Traffic engineering solutions with three different levels of control have been investigated. First, it is shown, using detailed computer simulation of popular applications and networking protocols (e.g., File Transfer Protocol and Transmission Control Protocol), that packet-level traffic engineering, which decides which Internet connection to use for each and every packet, leads to poor system throughput. The main problem with packet-based traffic engineering stems from the fact that in a mobile environment, where link bandwidths and delay can vary significantly, packets using different connections may experience different delays, causing them to arrive out of order at their destinations. Second, a maximum-utility flow-level traffic engineering scheme has been proposed that aims to maximize a utility function accounting for bandwidth utilization on the one hand and fairness on the other. The proposed solution is compared against previously proposed flow-level traffic engineering schemes and shown to have better performance in terms of throughput and fairness. The third traffic engineering proposal addresses the issues of maximizing the operator's profit when different Internet connections have different charging rates, and of guaranteeing per-user bandwidth through admission control. Finally, a new signaling protocol is designed to allow the mobile router to control its inbound traffic.
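As a rough illustration of flow-level traffic engineering with a utility objective, the sketch below greedily assigns each arriving flow to the back-haul connection that leaves the largest log-sum of residual capacities, a crude stand-in for a utility that trades off utilisation against fairness. The utility form, the greedy rule and the link figures are assumptions for illustration, not the scheme proposed in the thesis.

```python
# Greedy flow-to-connection assignment under a log-residual utility (illustrative).
import math

def log_utility(residuals):
    # Log-sum of residual capacities: rewards keeping all links comfortably loaded.
    return sum(math.log(max(r, 1e-9)) for r in residuals)

def assign_flow(flow_rate, loads, capacities):
    best, best_u = None, -math.inf
    for i, (load, cap) in enumerate(zip(loads, capacities)):
        if load + flow_rate > cap:        # skip connections that would saturate
            continue
        residuals = [c - l for l, c in zip(loads, capacities)]
        residuals[i] -= flow_rate
        utility = log_utility(residuals)
        if utility > best_u:
            best, best_u = i, utility
    if best is not None:
        loads[best] += flow_rate
    return best

# Example: three back-haul links (e.g. 802.11, 802.16, CDMA), rates in Mbit/s.
caps, loads = [6.0, 10.0, 1.5], [0.0, 0.0, 0.0]
for rate in [1.0, 2.0, 3.0, 3.0]:
    print(assign_flow(rate, loads, caps), loads)
```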
104

Toward semantic interoperability for software systems

Lister, Kendall January 2008 (has links)
“In an ill-structured domain you cannot, by definition, have a pre-compiled schema in your mind for every circumstance and context you may find ... you must be able to flexibly select and arrange knowledge sources to most efficaciously pursue the needs of a given situation.” [57]
In order to interact and collaborate effectively, agents, whether human or software, must be able to communicate through common understandings and compatible conceptualisations. Ontological differences that occur either from pre-existing assumptions or as side-effects of the process of specification are a fundamental obstacle that must be overcome before communication can occur. Similarly, the integration of information from heterogeneous sources is an unsolved problem. Efforts have been made to assist integration, through both methods and mechanisms, but automated integration remains an unachieved goal. Communication and information integration are problems of meaning and interaction, or semantic interoperability. This thesis contributes to the study of semantic interoperability by identifying, developing and evaluating three approaches to the integration of information. These approaches have in common that they are lightweight in nature, pragmatic in philosophy and general in application.
The first work presented is an effort to integrate a massive, formal ontology and knowledge-base with semi-structured, informal heterogeneous information sources via a heuristic-driven, adaptable information agent. The goal of the work was to demonstrate a process by which task-specific knowledge can be identified and incorporated into the massive knowledge-base in such a way that it can be generally re-used. The practical outcome of this effort was a framework that illustrates a feasible approach to providing the massive knowledge-base with an ontologically-sound mechanism for automatically generating task-specific information agents to dynamically retrieve information from semi-structured information sources without requiring machine-readable meta-data.
The second work presented is based on reviving a previously published and neglected algorithm for inferring semantic correspondences between fields of tables from heterogeneous information sources. An adapted form of the algorithm is presented and evaluated on relatively simple and consistent data collected from web services in order to verify the original results, and then on poorly-structured and messy data collected from web sites in order to explore the limits of the algorithm. The results are presented via standard measures and are accompanied by detailed discussions on the nature of the data encountered and an analysis of the strengths and weaknesses of the algorithm and the ways in which it complements other approaches that have been proposed.
Acknowledging the cost and difficulty of integrating semantically incompatible software systems and information sources, the third work presented is a proposal and a working prototype for a web site to facilitate the resolving of semantic incompatibilities between software systems prior to deployment, based on the commonly-accepted software engineering principle that the cost of correcting faults increases exponentially as projects progress from phase to phase, with post-deployment corrections being significantly more costly than those performed earlier in a project’s life. The barriers to collaboration in software development are identified and steps taken to overcome them. The system presented draws on the recent collaborative successes of social and collaborative on-line projects such as SourceForge, Del.icio.us, digg and Wikipedia and a variety of techniques for ontology reconciliation to provide an environment in which data definitions can be shared, browsed and compared, with recommendations automatically presented to encourage developers to adopt data definitions compatible with previously developed systems.
In addition to the experimental works presented, this thesis contributes reflections on the origins of semantic incompatibility with a particular focus on interaction between software systems, and between software systems and their users, as well as detailed analysis of the existing body of research into methods and techniques for overcoming these problems.
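The abstract does not spell out the mechanics of the revived field-matching algorithm, so the sketch below shows only a generic value-overlap (Jaccard) heuristic as one simple way correspondences between fields of heterogeneous tables might be inferred; the field names and data are invented for illustration.

```python
# Pair each field of one table with the field of another table whose values
# overlap most (Jaccard similarity). Illustrative heuristic only.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_fields(table_a, table_b):
    """table_a, table_b: {field_name: [values, ...]} -> best match per field."""
    matches = {}
    for name_a, values_a in table_a.items():
        scored = [(jaccard(values_a, values_b), name_b)
                  for name_b, values_b in table_b.items()]
        score, name_b = max(scored)
        matches[name_a] = (name_b, round(score, 2))
    return matches

web_service = {"country": ["AU", "NZ", "UK"], "temp_c": [18, 21, 15]}
web_site = {"nation": ["AU", "UK", "US"], "temperature": [18, 15, 30]}
print(match_fields(web_service, web_site))
# -> {'country': ('nation', 0.5), 'temp_c': ('temperature', 0.5)}
```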
105

Assessing the reliability of digital evidence from live investigations involving encryption

Hargreaves, Christopher James January 2009 (has links)
The traditional approach to a digital investigation when a computer system is encountered in a running state is to remove the power, image the machine using a write blocker and then analyse the acquired image. This has the advantage of preserving the contents of the computer’s hard disk at that point in time. However, the disadvantage of this approach is that the preservation of the disk is at the expense of volatile data such as that stored in memory, which does not remain once the power is disconnected. There are an increasing number of situations where this traditional approach of ‘pulling the plug’ is not ideal since volatile data is relevant to the investigation; one of these situations is when the machine under investigation is using encryption. If encrypted data is encountered on a live machine, a live investigation can be performed to preserve this evidence in a form that can be later analysed. However, there are a number of difficulties with using evidence obtained from live investigations that may cause the reliability of such evidence to be questioned. This research investigates whether digital evidence obtained from live investigations involving encryption can be considered to be reliable. To determine this, a means of assessing reliability is established, which involves evaluating digital evidence against a set of criteria; evidence should be authentic, accurate and complete. This research considers how traditional digital investigations satisfy these requirements and then determines the extent to which evidence from live investigations involving encryption can satisfy the same criteria. This research concludes that it is possible for live digital evidence to be considered to be reliable, but that reliability of digital evidence ultimately depends on the specific investigation and the importance of the decision being made. However, the research provides structured criteria that allow the reliability of digital evidence to be assessed, demonstrates the use of these criteria in the context of live digital investigations involving encryption, and shows the extent to which each can currently be met.
106

Genetic algorithm applied to generalized cell formation problems / Algorthmes génétiques appliqués aux problèmes de formation de cellules de production avec routages et processes alternatifs

Vin, Emmanuelle 19 March 2010 (has links)
The objective of cellular manufacturing is to simplify the management of manufacturing industries. By regrouping the production of different parts into clusters, the management of manufacturing is reduced to managing several small entities. One of the most important problems in cellular manufacturing is the design of these entities, called cells. These cells represent clusters of machines that can be dedicated to the production of one or several parts. The ideal design of a cellular manufacturing system makes these cells totally independent from one another, i.e. each part is dedicated to only one cell (provided it can be produced completely inside this cell). The reality is a little more complex: once the cells are created, some traffic still exists between them. This traffic corresponds to the transfer of a part between two machines belonging to different cells. The final objective is to reduce this traffic between the cells (called inter-cellular traffic). Different methods exist to produce these cells and dedicate them to parts. To create independent cells, a choice can be made between different ways to produce each part. Two interdependent problems must be solved:
• the allocation of each operation to a machine: each part is defined by one or several sequences of operations, and each operation can be achieved by a set of machines; a final sequence of machines must be chosen to produce each part;
• the grouping of the machines into cells, which produces traffic inside and outside the cells.
Depending on the solution to the first problem, different clusters will be created to minimise the inter-cellular traffic. In this thesis, an original method based on the grouping genetic algorithm (Gga) is proposed to solve these two interdependent problems simultaneously. The efficiency of the method is highlighted in comparison with methods based on two integrated algorithms or heuristics. Indeed, to form these cells of machines together with the allocation of operations to machines, the methods used to solve large-scale problems are generally composed of two nested algorithms: the main one calls the secondary one to complete the first part of the solution. The application domain goes beyond the manufacturing industry and can, for example, be applied to the design of electronic systems, as explained in the future research.
Doctorat en Sciences de l'ingénieur
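A small sketch of the objective at the heart of this kind of cell formation problem: given each part's selected machine routing and an assignment of machines to cells, count the volume-weighted inter-cellular transfers. The chromosome encoding and operators of the grouping genetic algorithm are not reproduced here; the names and the toy instance are assumptions for illustration.

```python
# Volume-weighted inter-cellular traffic for a chosen set of routings (illustrative).
def intercell_traffic(routings, cell_of, volume):
    """routings: {part: [machine, ...]} (the selected operation sequence)
    cell_of:  {machine: cell id}
    volume:   {part: production volume}"""
    traffic = 0
    for part, sequence in routings.items():
        for a, b in zip(sequence, sequence[1:]):
            if cell_of[a] != cell_of[b]:      # transfer crosses a cell boundary
                traffic += volume[part]
    return traffic

# Toy instance: two cells, two parts, routings already fixed.
routings = {"P1": ["M1", "M2", "M3"], "P2": ["M3", "M4"]}
cell_of = {"M1": 0, "M2": 0, "M3": 1, "M4": 1}
print(intercell_traffic(routings, cell_of, {"P1": 10, "P2": 5}))  # -> 10
```

A grouping genetic algorithm of the kind described would evolve both the routing choices and the machine-to-cell grouping, using a quantity like this as the fitness to minimise.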
107

Assessing the Reliability of Digital Evidence from Live Investigations Involving Encryption

Hargreaves, C J 24 November 2009 (has links)
The traditional approach to a digital investigation when a computer system is encountered in a running state is to remove the power, image the machine using a write blocker and then analyse the acquired image. This has the advantage of preserving the contents of the computer’s hard disk at that point in time. However, the disadvantage of this approach is that the preservation of the disk is at the expense of volatile data such as that stored in memory, which does not remain once the power is disconnected. There are an increasing number of situations where this traditional approach of ‘pulling the plug’ is not ideal since volatile data is relevant to the investigation; one of these situations is when the machine under investigation is using encryption. If encrypted data is encountered on a live machine, a live investigation can be performed to preserve this evidence in a form that can be later analysed. However, there are a number of difficulties with using evidence obtained from live investigations that may cause the reliability of such evidence to be questioned. This research investigates whether digital evidence obtained from live investigations involving encryption can be considered to be reliable. To determine this, a means of assessing reliability is established, which involves evaluating digital evidence against a set of criteria; evidence should be authentic, accurate and complete. This research considers how traditional digital investigations satisfy these requirements and then determines the extent to which evidence from live investigations involving encryption can satisfy the same criteria. This research concludes that it is possible for live digital evidence to be considered to be reliable, but that reliability of digital evidence ultimately depends on the specific investigation and the importance of the decision being made. However, the research provides structured criteria that allow the reliability of digital evidence to be assessed, demonstrates the use of these criteria in the context of live digital investigations involving encryption, and shows the extent to which each can currently be met.
108

A framework for processing correlated probabilistic data

van Schaik, Sebastiaan Johannes January 2014 (has links)
The amount of digitally-born data has surged in recent years. In many scenarios, this data is inherently uncertain (or: probabilistic), such as data originating from sensor networks, image and voice recognition, location detection, and automated web data extraction. Probabilistic data requires novel and different approaches to data mining and analysis, which explicitly account for the uncertainty and the correlations therein. This thesis introduces ENFrame: a framework for processing and mining correlated probabilistic data. Using this framework, it is possible to express both traditional and novel algorithms for data analysis in a special user language, without having to explicitly address the uncertainty of the data on which the algorithms operate. The framework will subsequently execute the algorithm on the probabilistic input, and perform exact or approximate parallel probability computation. During the probability computation, correlations and provenance are succinctly encoded using probabilistic events. This thesis contains novel contributions in several directions. An expressive user language – a subset of Python – is introduced, which allows a programmer to implement algorithms for probabilistic data without requiring knowledge of the underlying probabilistic model. Furthermore, an event language is presented, which is used for the probabilistic interpretation of the user program. The event language can succinctly encode arbitrary correlations using events, which are the probabilistic counterparts of deterministic user program variables. These highly interconnected events are stored in an event network, a probabilistic interpretation of the original user program. Multiple techniques for exact and approximate probability computation (with error guarantees) of such event networks are presented, as well as techniques for parallel computation. Adaptations of multiple existing data mining algorithms are shown to work in the framework, and are subsequently subjected to an extensive experimental evaluation. Additionally, a use-case is presented in which a probabilistic adaptation of a clustering algorithm is used to predict faults in energy distribution networks. Lastly, this thesis presents techniques for integrating a number of different probabilistic data formalisms for use in this framework and in other applications.
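A toy illustration of the event idea described above: user-program variables become boolean events over shared random variables, and correlations arise because events reuse the same variables. Probability here is computed by brute-force enumeration of possible worlds; ENFrame's actual event encoding and its exact and approximate algorithms differ and are not reproduced in this sketch.

```python
# Exact probability of an event over independent boolean variables, by
# enumerating possible worlds (illustrative only; infeasible beyond toy sizes).
from itertools import product

def prob(event, var_probs):
    """event: function mapping {var: bool} to bool
    var_probs: {var: P(var is True)}, variables assumed independent."""
    names = list(var_probs)
    total = 0.0
    for values in product([False, True], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for name, value in world.items():
            weight *= var_probs[name] if value else 1.0 - var_probs[name]
        if event(world):
            total += weight
    return total

# Two "program variables" correlated through the shared variable x1.
var_probs = {"x1": 0.7, "x2": 0.4}
in_cluster_a = lambda w: w["x1"]
in_cluster_b = lambda w: w["x1"] and w["x2"]
print(prob(in_cluster_a, var_probs))                                   # 0.7
print(prob(lambda w: in_cluster_a(w) and in_cluster_b(w), var_probs))  # 0.28
```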
109

Towards a high performance parallel library to compute fluid flexible structures interactions

Nagar, Prateek 08 April 2015 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The LBM-IB method is a useful and popular simulation technique that is adopted ubiquitously to solve fluid-structure interaction problems in computational fluid dynamics. These problems are known for utilizing computing resources intensively while solving the mathematical equations involved in simulations. Problems involving such interactions are omnipresent; therefore, it is essential that a faster and more accurate algorithm exist for solving these equations, to reproduce a real-life model of such complex analytical problems in a shorter time period. LBM-IB, being inherently parallel, proves to be an ideal candidate for developing parallel software. This research focuses on developing a parallel software library, LBM-IB, based on the algorithm proposed by [1], which is the first of its kind to utilize the high-performance computing abilities of the supercomputers procurable today. An initial sequential version of LBM-IB is developed and used as a benchmark for correctness and performance evaluation of the shared-memory parallel versions. Two shared-memory parallel versions of LBM-IB have been developed, using OpenMP and the Pthread library respectively. The OpenMP version scales well, achieving up to 83% speedup on multicore machines for <=8 cores. Based on profiling and instrumentation of this version, and in order to improve data locality and increase the degree of parallelism, a Pthread-based data-centric version is developed that outperforms the OpenMP version by 53% on manycore machines. A distributed version using MPI interfaces on top of the cube-based Pthread version has also been designed for use by extreme-scale distributed-memory manycore systems.
110

Development of a Battery Monitoring System for Data-Driven AI Detection of Accelerated Lithium-Ion Degradation

Alexey Y Serov (16385037) 16 June 2023 (has links)
Many machine learning models exist for battery management systems to utilize. Few have been shown to work. This work focuses on gathering data from cycling battery packs and sending this data directly to machine learning models built on robust datasets, applying the resulting predicted values and outputs directly on top of real-time systems. A parasitic sensor network was created, composed of a main microcontroller, a host CPU, and various sensors, including resistance temperature detectors (RTDs), a voltage measurement circuit, a current measurement circuit, and an accelerometer/gyroscope. The resulting network was integrated parasitically with a 4-cell 18650 SONY VTC6 battery pack, then tested both on the ground and in flight with a commercial quadcopter. Real-time data for the battery pack, with four cells in series, was gathered. This real-time data stream was then integrated with data-driven neural network algorithms trained on various 18650 datasets and a real physical model to finalize the “AI BMS”. Using the power of non-linear models to infer battery health impacts not normally considered in battery management systems, the “AI BMS” was able to use low-fidelity real-time data in conjunction with a powerful multi-faceted model to make predictive decisions about battery health characteristics on top of normal system operations.
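A hedged sketch of the real-time path described above: windowed sensor samples (cell voltages, pack current, RTD temperatures) are reduced to features and fed to a previously trained model. The feature set and the stand-in linear scorer below are illustrative assumptions, not the thesis's sensor network or trained neural network.

```python
# Reduce a window of pack telemetry to features and score degradation risk.
# The linear/sigmoid scorer stands in for a trained data-driven model.
import numpy as np

def window_features(voltages, current, temps):
    """voltages: (samples, 4) for a 4-cell series pack; current, temps: 1-D arrays."""
    v = np.asarray(voltages)
    return np.array([
        v.mean(), v.min(),                        # pack-level voltage statistics
        (v.max(axis=1) - v.min(axis=1)).max(),    # worst cell-to-cell imbalance in window
        np.mean(current), np.max(np.abs(current)),
        np.mean(temps), np.max(temps),
    ])

def degradation_score(features, weights, bias):
    # Stand-in for the trained model: a single sigmoid score in [0, 1].
    return float(1.0 / (1.0 + np.exp(-(features @ weights + bias))))

rng = np.random.default_rng(0)
feats = window_features(3.6 + 0.05 * rng.standard_normal((100, 4)),
                        10 + rng.standard_normal(100),
                        30 + rng.standard_normal(100))
print(degradation_score(feats, 0.1 * rng.standard_normal(feats.size), -0.5))
```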
