About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Measurement data selection and association in a collision mitigation system / Filtrering av mätdata och association i ett kollisions varnings system

Glawing, Henrik January 2002 (has links)
Today many car manufacturers are developing systems that help the driver avoid collisions. Examples of such systems are adaptive cruise control, collision warning, and collision mitigation/avoidance. All these systems need to track and predict the future positions of surrounding objects (vehicles ahead of the host vehicle) in order to calculate the risk of a future collision. To validate that a prediction is correct, predictions must be correlated to observations; this is called the data association problem. If a prediction can be correlated to an observation, that observation is used to update the tracking filter, which maintains a low uncertainty level for the track. The work behind this thesis found that a sequential nearest-neighbour approach to correlating observations with predictions can solve the data association problem. Since the computational power of the collision mitigation system is limited, only the most dangerous surrounding objects can be tracked and predicted. An algorithm is therefore developed that classifies and selects the most critical measurements; the ranking by potential risk can be computed from the measurements coming from an observed object.
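The sequential nearest-neighbour association described in this abstract can be sketched as a greedy loop that pairs each prediction with its closest unassigned observation inside a validation gate. This is a minimal illustration, not the thesis's implementation; the 2-D point representation and the gate threshold are illustrative assumptions.

```python
import math

def associate(predictions, observations, gate=5.0):
    """Greedy sequential nearest-neighbour data association.

    predictions, observations: lists of (x, y) positions.
    Returns a dict mapping prediction index -> observation index,
    pairing each prediction with its closest still-unassigned
    observation inside the validation gate.
    """
    pairs = {}
    free = set(range(len(observations)))
    for i, p in enumerate(predictions):
        best, best_d = None, gate
        for j in free:
            d = math.dist(p, observations[j])
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs[i] = best       # observation feeds the tracking filter update
            free.discard(best)    # each observation is used at most once
    return pairs
```

Predictions with no observation inside the gate remain unassociated, so their track uncertainty would grow until the track is dropped.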
22

Recognizing Combustion Variability for Control of Gasoline Engine Exhaust Gas Recirculation using Information from the Ion Current

Holub, Anna, Liu, Jie January 2006 (has links)
The ion current measured at the spark plug of a spark-ignited combustion engine is used as the basis for analysis and control of the combustion variability caused by exhaust gas recirculation. Methods for extracting in-cylinder pressure information from the ion current are analyzed in terms of reliability and processing efficiency. A model for recognizing combustion variability using this information is selected and tested on both simulated and car data.
24

Aggregation of autoregressive processes and random fields with finite or infinite variance / Autoregresinių procesų ir atsitiktinių laukų su baigtine arba begaline dispersija agregavimas

Puplinskaitė, Donata 29 October 2013 (has links)
Aggregated data appears in many areas such as economics, sociology, and geography, which motivates the study of the (dis)aggregation problem. One of the most important reasons why contemporaneous aggregation became an object of research is the possibility of obtaining long-memory phenomena in processes. Aggregation both explains the long-memory effect in time series and provides a method for simulating such series. Accumulation of short-memory non-ergodic random processes can lead to a long-memory ergodic process that can be used for forecasting macro and micro variables. We explore the aggregation scheme of AR(1) processes and nearest-neighbour random fields with infinite variance. We provide results on the existence of limit aggregated processes and find conditions under which they have long-memory properties in a certain sense. For random fields on Z^2, we introduce a notion of (an)isotropic long memory based on the behaviour of partial sums. In the L_2 case, the known aggregation of independent AR(1) processes leads to a Gaussian limit; we describe a new model of aggregation based on independent triangular arrays, a scheme that yields a limit aggregated process with finite variance which is not necessarily Gaussian. We also study a discrete-time insurance risk model with stationary claims, modelled by the aggregated heavy-tailed process, and establish the asymptotic properties of the ruin probability and the dependence structure... [to full text]
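The contemporaneous aggregation scheme mentioned in the abstract — averaging many independent short-memory AR(1) series with random coefficients concentrated near 1 — can be sketched in a minimal simulation. The Beta-distributed coefficients below are an illustrative choice in the spirit of Granger's classic scheme, not the thesis's exact mixture density.

```python
import random

def aggregated_ar1(n_series=500, n_steps=200, seed=0):
    """Cross-sectional average of independent AR(1) processes
    X_t = a * X_{t-1} + e_t, each with its own random coefficient a.

    With coefficients drawn near 1, the aggregate exhibits much slower
    decay of autocorrelation than any individual (short-memory) series.
    """
    rng = random.Random(seed)
    agg = [0.0] * n_steps
    for _ in range(n_series):
        a = rng.betavariate(2, 1)  # coefficients concentrated near 1 (assumed density)
        x = 0.0
        for t in range(n_steps):
            x = a * x + rng.gauss(0, 1)
            agg[t] += x / n_series
    return agg
```

Comparing the sample autocorrelation of one AR(1) path against the aggregate is a simple way to see the emerging long-memory behaviour.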
26

探討三種分類方法來提升混合方式用在兩階段決策模式的準確率:以旅遊決策為例 / Improving the precision rate of the Two-stage Decision Model in the context of tourism decision-making via exploring Decision Tree, Multi-staged Binary Tree and Back Propagation of Error Neural Network

陳怡倩, Chen, Yi Chien Unknown Date (has links)
A two-stage data mining technique for classification in tourism recommendation systems is needed to connect user perception, decision criteria, and decision purpose. In the existing literature, a hybrid data mining method combining Decision Tree and K-nearest-neighbour approaches (DTKNN) was proposed. It achieves a high precision rate of approximately 80% in the K-nearest-neighbour (KNN) stage but a much lower rate in the first stage using a Decision Tree (Fu & Tu, 2011). This suggests two potential improvements to the two-stage technique. Improving the precision rate and efficiency of the first stage of DTKNN also decreases the number of questions users must answer when searching for a recommendation on the system. In this paper, the researcher investigates how to improve the first stage of DTKNN for full questionnaires, and also assesses the suitability of a dynamic questionnaire, based on its precision rate, for a future tourism recommendation system. Firstly, this study compared Decision Tree, Multi-staged Binary Tree, and Back Propagation of Error Neural Network (BPNN) and chose the method with the highest precision rate; the chosen method is then combined with KNN in a new methodology. Secondly, the study assessed the suitability of dynamic questionnaires for all three classification methods by decreasing the number of attributes; a suitable dynamic questionnaire uses the fewest attributes while maintaining an appropriate precision rate. A tourism recommendation system is selected as the target for applying and analysing the algorithm, as tour selection is a two-stage example: the first stage determines the expected goal and experience before going on a tour, and the second chooses the tour that best matches stage one. The results indicate that the Multi-staged Binary Tree has the highest precision rate at 74.167%, compared to the Decision Tree at 73.33% and BPNN at 65.47%, for the full questionnaire.
This new approach improves the effectiveness of the system by raising the precision rate of the first stage of the current DTKNN method. For the dynamic questionnaire, the results show that the Decision Tree is the most suitable method, with the smallest drop in precision rate relative to the full questionnaire (1.33%, as opposed to 1.48% for BPNN and 4% for the Multi-staged Binary Tree). A dynamic questionnaire thus also improves efficiency by decreasing the number of questions users are required to fill in when searching for a recommendation, giving them the option not to answer some questions. It also increases the practicality of the non-dynamic questionnaire and thereby affects the ultimate precision rate.
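The KNN majority vote used in the second stage of DTKNN can be sketched as follows. The feature vectors, distance metric, and value of k are illustrative assumptions; this is not the authors' exact implementation.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.

    train: list of (feature_vector, label) pairs.
    query: a feature vector (tuple of numbers).
    """
    # Sort training points by Euclidean distance to the query, keep k closest.
    neighbours = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

In a two-stage system, the first-stage classifier (e.g. a decision tree) would narrow the candidate set before this vote is taken.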
27

Towards large-scale quantum computation

Fowler, Austin Greig Unknown Date (has links) (PDF)
This thesis deals with a series of quantum computer implementation issues from the Kane 31P in 28Si architecture to Shor’s integer factoring algorithm and beyond. The discussion begins with simulations of the adiabatic Kane CNOT and readout gates, followed by linear nearest neighbor implementations of 5-qubit quantum error correction with and without fast measurement. A linear nearest neighbor circuit implementing Shor’s algorithm is presented, then modified to remove the need for exponentially small rotation gates. Finally, a method of constructing optimal approximations of arbitrary single-qubit fault-tolerant gates is described and applied to the specific case of the remaining rotation gates required by Shor’s algorithm.
28

Optimum Polarization States & their Role in UWB Radar Identification of Targets

Faisal Aldhubaib Unknown Date (has links)
Although the use of polarimetry techniques for recognition of military and civilian targets is well established in the narrowband context, it is not yet fully established in a broadband sense, in contrast to planetary research. The concept of combining polarimetry with certain areas of broadband technology, thus forming a robust signature and feature set, has been the main theme of this thesis. This is important, as basing the feature set on multiple types of signatures can increase the accuracy of the recognition process. In this thesis, the concept of radar target recognition based upon a polarization signature in a broadband context is examined. A proper UWB radar signal can excite the target's dominant resonances and, consequently, reveal information about the target's principal dimensions, while diversity in the polarization domain reveals information about the target's shape. The target dimensions are used to classify the target, and information about its shape is then used to identify it. Fused together and inferred from the target characteristic polarization states, it was verified that the polarization information at dominant resonant frequencies has both a physical interpretation and attributes (as seen in section 3.4.3) related to the target's symmetry, linearity, and orientation. In addition, this type of information has the ability to detect the presence of major scattering mechanisms, such as the strong specular reflection from the cylinder's flat ends. Throughout the thesis, simulated canonical targets with similar resonant frequencies were used, and thus identification of radar targets was based solely on polarization information.
In this framework, the resonant frequencies were identified either as peaks in the frequency response, for simple low-damping targets such as thin metal wires, or as the imaginary parts of the complex poles, for complex high-damping targets with significant diameter and dielectric properties. The main contribution of this thesis therefore originates from the ability to integrate the optimum polarization states in a broadband context for improved target recognition performance. In this context, the spectral dispersion arising from the broad nature of the radar signal, the lack of accuracy in extracting the target resonances, the robustness of the polarization feature set, the representation of these states in the time domain, and the modelling of the feature set under spatial variation are among the important issues addressed, with several approaches presented to overcome them. The general approach considered involves a subset of "representative" times in the time domain or, correspondingly, "representative" frequencies in the frequency domain, with an optimum polarization state associated with each member of the subset. The first approach, in chapter 3, represented polarization by a set of frequency bands associated with the target resonant frequencies. This description involved formulating a wideband scattering matrix to accommodate the broad nature of the signal, with an appropriate bandwidth selected for each resonance; good estimation of the optimum polarization states was achievable with this procedure even at low signal-to-noise ratios. The second approach, in chapter 4, extended the work of chapter 3 by modifying the optimum polarization states by their associated powers. In addition, this approach included an identification algorithm based on the nearest-neighbour technique.
To identify the target, the identification algorithm used the states at a set of resonant frequencies in a majority vote. A comparison of the modified polarization states against the original states then demonstrated a clear improvement when the modified set is used. Generally, the accuracy of the resonance-set estimate is more reliable in the time domain than in the frequency domain, especially for resonances well localized in time. The third approach, in chapter 5, therefore deals with the optimum states in the time domain, where the extension to a wideband context was possible by virtue of the polarization information embodied in the energy of the resonances. This procedure used a model-based signature that models the target impulse response as a set of resonances. The relevant resonance parameters, in this case the resonant frequency and its associated energy, were extracted using the Matrix Pencil of Function algorithm. This sparse representation is necessary to find descriptors of the target impulse response that are time-invariant and, at the same time, relate robustly to the target's physical characteristics. A simple target such as a long wire showed that polarization information contained in the target resonance energies can indeed reflect the target's physical attributes. For noise-corrupted signals and without any pulse averaging, the accuracy in estimating the optimum states was sufficiently good for signal-to-noise ratios above 20 dB; below this level, extraction of some members of the resonance set is not possible. Using more complex wire models of aircraft, these time-based optimum states could also distinguish between targets of similar dimensions with small structural differences, e.g. different wing dihedral angles. The results also showed that the dominant resonance set has members belonging to different structural sections of the target.
Incorporating a time-based polarization set can therefore capture the target's full physical characteristics. In the final procedure, a statistical kernel function estimated the feature set derived in chapter 3 as a function of aspect angle. After sampling the feature set over a wide range of angular aspects, a criterion based on the Bayesian error bisected the target's global aspect into smaller sectors to decrease the variance of the estimate and, subsequently, the probability of error. In doing so, discriminative features with an acceptable minimum probability of error were achievable. The minimum-probability-of-error criterion and the angular bisection of the target could separate the feature sets of two targets with similar resonances.
29

Optimalizace rozvozu a svozu infuzních roztoků / Optimization distribution and collection of infusion solutions

Kravciv, Zbyněk January 2009 (has links)
There are many distribution problems, which vary in the number of vehicles, time windows, divided or undivided deliveries, and whether they are static or dynamic. In this essay I focus on just a few of them. First I address a simple static distribution task with one vehicle; later I extend it with time windows, where a point can be served by one vehicle or by many. The essay then solves a real task: the distribution and collection of infusion solutions in hospitals. Because of the difficulty of the problem, heuristic methods must be used: the nearest neighbour method, the savings method, and the insertion method. All of these methods are modified for capacity requirements, time windows, and the rules the drivers have to observe during distribution. The aim is to minimize the distance travelled by the vehicles. Finally, the best solution can be recommended to the company.
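The nearest neighbour construction heuristic mentioned above can be sketched for a capacitated distribution task. This is a minimal illustration: the capacity handling is simplified, time windows and the thesis's driver rules are omitted, and the coordinate-tuple customer representation is an assumption.

```python
import math

def nearest_neighbour_routes(depot, customers, demands, capacity):
    """Greedy route construction: from the current stop, always drive to
    the nearest unserved customer whose demand still fits the remaining
    vehicle capacity; when none fits, return to the depot and start a
    new route.

    customers: list of (x, y) points; demands: dict point -> demand.
    Returns a list of routes (each a list of customer points).
    """
    unserved = set(customers)
    routes = []
    while unserved:
        route, pos, load = [], depot, 0
        while True:
            feasible = [c for c in unserved if load + demands[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: math.dist(pos, c))
            route.append(nxt)
            load += demands[nxt]
            unserved.discard(nxt)
            pos = nxt
        routes.append(route)
    return routes
```

Adding time windows would shrink the `feasible` set further, which is essentially how the modifications described in the abstract slot into this loop.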
30

Návrh a aplikace heuristických metod při rozvozu objednávek zákazníkům společnosti NIKOL NÁPOJE a. s. / Design and application of heuristics in distribution of ordered products to the consumers of NIKOL NÁPOJE a. s. company

Solnická, Veronika January 2010 (has links)
This thesis deals with the optimization of the distribution of products to consumers, based on a real case study of a particular company from Opava. For this purpose, a mathematical optimization model of the vehicle routing problem is used. The study also explains the relevance of heuristic methods, mainly with respect to their application to real-life situations analogous to the one surveyed. On the basis of the chosen heuristic methods (the nearest neighbour algorithm and the savings algorithm), and taking into account the restricting conditions of the company, four algorithms were designed. These four algorithms are programmed in Visual Basic for Applications in MS Excel 2007 and are aimed at solving the real problems with the distribution of ordered products that the company must deal with. The thesis compares the results provided by an employee of this company with the results produced by the designed algorithms.
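The core of the savings algorithm referred to above is the Clarke-Wright savings computation: merging the routes of customers i and j saves s(i, j) = d(depot, i) + d(depot, j) - d(i, j) of travel. The sketch below computes and ranks these savings (the route-merging step itself, and the company's restricting conditions, are omitted); the coordinate-tuple representation is an illustrative assumption.

```python
import math

def savings_pairs(depot, customers):
    """Clarke-Wright savings s(i, j) = d(0, i) + d(0, j) - d(i, j) for
    every pair of customers, returned sorted in decreasing order.

    Merging routes greedily along this sorted list (subject to capacity
    and other feasibility checks) is the savings algorithm proper.
    """
    d = math.dist
    pairs = []
    for i in range(len(customers)):
        for j in range(i + 1, len(customers)):
            s = d(depot, customers[i]) + d(depot, customers[j]) \
                - d(customers[i], customers[j])
            pairs.append((s, i, j))
    return sorted(pairs, reverse=True)
```

Pairs of customers that lie close together but far from the depot yield the largest savings, which is why they are merged onto one route first.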
