41 |
Interval Based Parameter Identification for System Biology / Intervallbaserad parameteridentifiering för systembiologi
Alami, Mohsen January 2012
This master thesis studies the problem of parameter identification for systems biology. Two methods have been studied. The method of interval analysis uses subpavings as a class of objects to manipulate and store inner and outer approximations of compact sets. This method works well when the model is given as a system of differential equations, but has its limitations, since the analytical expression for the solution of the ODE, which is needed for constructing the inclusion function, is not always obtainable. The other method studied is SDP relaxation of a nonlinear and non-convex feasibility problem. This method, implemented in the toolbox bio.SDP, works with systems of difference equations obtained using the Euler discretization method. The discretization is not exact, raising the need to bound the discretization error. Several methods for bounding this error have been studied; the method of ∞-norm optimization, also called the worst-case ∞-norm, is applied to the one-step error estimation method. The methods are illustrated on two systems-biology problems, and the resulting accepted parameter sets (SCP) are compared and discussed.
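To make the discretization step concrete, here is a minimal sketch (illustrative only, not the thesis code) of forward-Euler discretization of a hypothetical two-state reaction model, together with a crude worst-case ∞-norm estimate of the one-step error; the model f, the parameter values, and the finite-difference error estimate are all assumptions:

    import numpy as np

    def f(x, p):
        # Hypothetical two-state kinetics, x' = f(x, p); not from the thesis.
        s, e = x
        k1, k2 = p
        return np.array([-k1 * s * e, k1 * s * e - k2 * e])

    def euler_trajectory(x0, p, h, n_steps):
        # Forward-Euler discretization: x_{k+1} = x_k + h * f(x_k, p).
        xs = [np.asarray(x0, dtype=float)]
        for _ in range(n_steps):
            xs.append(xs[-1] + h * f(xs[-1], p))
        return np.array(xs)

    def worst_case_one_step_error(xs, p, h):
        # The local Euler error satisfies ||e_k||_inf <= (h^2 / 2) * max ||x''||_inf;
        # here x'' is crudely estimated by finite differences of f along the
        # computed trajectory, so this is an estimate, not a rigorous bound.
        fs = np.array([f(x, p) for x in xs])
        xdd = np.diff(fs, axis=0) / h
        return 0.5 * h**2 * np.max(np.abs(xdd))

    xs = euler_trajectory([1.0, 0.5], p=(0.3, 0.1), h=0.01, n_steps=1000)
    print(worst_case_one_step_error(xs, (0.3, 0.1), 0.01))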
|
42 |
Interval methods for global optimization
Moa, Belaid 22 August 2007
We propose interval arithmetic and interval constraint algorithms for global optimization. Both compute lower and upper bounds of a function over a box, and return a lower and an upper bound for the global minimum. In interval arithmetic methods, the bounds are computed using interval arithmetic evaluations. Interval constraint methods instead use domain reduction operators and consistency algorithms.
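As a concrete illustration (a minimal sketch under our own assumptions, not code from the thesis), natural interval evaluation of an expression over a box encloses the range of the function, so its lower endpoint bounds the global minimum from below, while a point evaluation at the box midpoint bounds it from above:

    # Minimal interval arithmetic, just enough to evaluate a polynomial.
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, o):
            return Interval(self.lo + o.lo, self.hi + o.hi)

        def __sub__(self, o):
            return Interval(self.lo - o.hi, self.hi - o.lo)

        def __mul__(self, o):
            ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
            return Interval(min(ps), max(ps))

        def mid(self):
            return 0.5 * (self.lo + self.hi)

    def f(x, y):
        # The same expression works on floats and on Intervals.
        return x * x + y * y - x * y

    bx, by = Interval(-1.0, 2.0), Interval(0.0, 3.0)
    enclosure = f(bx, by)              # natural inclusion over the box
    lower = enclosure.lo               # lower bound on the global minimum
    upper = f(bx.mid(), by.mid())      # any point value upper-bounds the minimum
    print(lower, upper)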
The usual interval arithmetic algorithms for global optimization suffer from at least one of the following drawbacks:
- Mixing the fathoming problem, in which we ask for the global minimum only, with the localization problem, in which we ask for the set of points at which the global minimum occurs.
- Not handling inner and outer approximations of the epsilon-minimizer, the set of points at which the objective function is within epsilon of the global minimum.
- Saying nothing about the quality of their results in actual computation: the properties of the algorithms are stated only in the limit of infinite running time, infinite memory, and infinite precision of the floating-point number system.
To handle these drawbacks, we propose interval arithmetic algorithms for fathoming problems and for localization problems. For these algorithms we state properties that can be verified in actual executions of the algorithms. Moreover, the proposed algorithms return the best results that can be computed with the given expressions for the objective function and the constraints, on the given hardware.
Interval constraint methods combine interval arithmetic and constraint processing techniques, namely consistency algorithms, to obtain tighter bounds for the objective function over a box. The basic building block of interval constraint methods is the generic propagation algorithm, which explains our efforts to improve it as much as possible. All our algorithms, namely the dual, clustered, deterministic, and selective propagation algorithms, are developed as attempts to improve the efficiency of the generic propagation algorithm.
The relational box-consistency algorithm is another key algorithm in interval constraints. It keeps squashing the left and right bounds of the intervals of the variables until no further narrowing is possible. One drawback of this way of squashing is that the process becomes slower as it proceeds; another is that, in some cases, the actual narrowing occurs late.
To address these problems, we propose the following algorithms:
- Dynamic box-consistency algorithm: instead of pruning the left and then the right bound of each domain, we alternate the pruning between all the domains.
- Adaptive box-consistency algorithm: the idea is to get rid of boxes as soon as possible: start with small boxes and extend or shrink them depending on the pruning outcome. This adaptive behaviour makes the algorithm very suitable for quick squashing.
Since the efficiency of interval constraint optimization methods depends heavily on the sharpness of the upper bound for the global minimum, some effort must be made to find an appropriate point or box for computing the upper bound, rather than picking one at random as is commonly done. We therefore introduce interval constraints with exploration. These methods use non-interval methods as an exploratory step in solving a global optimization problem; the results of the exploration are then used to guide the interval constraint algorithms, and thus improve their efficiency.
|
43 |
Katowický problém / On the Katowice Problem
Chodounský, David January 2011
No description available.
|
44 |
Automatic Generation of Collision Hulls for Polygonal Objects / Automatisk Generering av Kollisionsskal för polygon objekt
Backenhof, Albert January 2011
Physics in interactive environments, such as computer games and simulations, requires well made and accurate bounding volumes in order to act both realistically and fast. Today it is common either to use inaccurate boxes or spheres as bounding volumes, or to model the volume by hand. These methods are either too inaccurate or require too much time to ever be usable in real-time, accurate virtual environments. This thesis presents a method to automatically generate collision hulls for both manifolds and non-manifolds. This allows meshes to be used in a physical environment within just a few seconds while still behaving realistically. The method performs Approximate Convex Decomposition by iteratively dividing the mesh into smaller, more convex parts. Every part is wrapped in a convex hull. Together the hulls make an accurate, but low-cost, convex representation of the original mesh. The convex hulls are stored in a bounding volume hierarchy tree structure that enables fast testing for collision with the mesh.
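As an illustration of the decomposition loop, here is a hypothetical sketch on a 3D point cloud (the thesis operates on polygon meshes and splits where the mesh is most concave; the spatial-median split below is only a stand-in for that criterion):

    import numpy as np
    from scipy.spatial import ConvexHull

    def decompose(points, max_pts=64, depth=0):
        # Recursively split along the longest axis until the parts are small,
        # then wrap each part in a convex hull. A real approximate convex
        # decomposition would split at the most concave feature instead.
        if len(points) <= max_pts or depth >= 8:
            return [ConvexHull(points)]
        axis = np.argmax(points.max(axis=0) - points.min(axis=0))
        median = np.median(points[:, axis])
        left = points[points[:, axis] <= median]
        right = points[points[:, axis] > median]
        if len(left) < 4 or len(right) < 4:    # a 3D hull needs >= 4 points
            return [ConvexHull(points)]
        return (decompose(left, max_pts, depth + 1)
                + decompose(right, max_pts, depth + 1))

    pts = np.random.rand(500, 3)
    hulls = decompose(pts)                     # list of convex pieces
    print(len(hulls), sum(h.volume for h in hulls))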
|
45 |
Accurate and efficient localisation in wireless sensor networks using a best-reference selection
Abu-Mahfouz, Adnan Mohammed 12 October 2011
Many wireless sensor network (WSN) applications depend on knowing the position of nodes within the network if they are to function efficiently. Location information is used, for example, in item tracking, routing protocols and controlling node density. Configuring each node with its position manually is cumbersome, and not feasible in networks with mobile nodes or dynamic topologies. WSNs therefore rely on localisation algorithms for the sensor nodes to determine their own physical location.

The basis of several localisation algorithms is the theory that the higher the number of reference nodes (called "references") used, the greater the accuracy of the estimated position. However, this approach makes computation more complex and increases the likelihood that the location estimate will be inaccurate. Such inaccuracy could be due to including data from nodes with a large measurement error, or from nodes that intentionally aim to undermine the localisation process. This approach also has limited success in networks with sparse references, or where data cannot always be collected from many references (due, for example, to communication obstructions or bandwidth limitations). These situations require a method for achieving reliable and accurate localisation using a limited number of references.

Designing a localisation algorithm that can estimate node position with high accuracy using a low number of references is not a trivial problem. As the number of references decreases, more statistical weight is attached to each reference's location estimate. The overall localisation accuracy therefore depends greatly on the robustness of the selection method used to eliminate inaccurate references.

Various localisation algorithms and their performance in WSNs were studied. Information-fusion theory was also investigated, and a new technique rooted in it was proposed for defining the best criteria for the selection of references. The selection criteria were chosen to identify only those references that would increase the overall localisation accuracy. Using these criteria also minimises the number of iterations needed to refine the accuracy of the estimated position, which reduces bandwidth requirements and the time required for a position estimate after any topology change (or even after initial network deployment). The resulting algorithm achieves two main goals simultaneously: accurate location discovery and information fusion. Moreover, it fulfils several secondary design objectives: a self-organising nature, simplicity, robustness, localised processing and security.

The proposed method was implemented and evaluated using a commercial network simulator. The evaluation demonstrated that the algorithm is superior to the other localisation algorithms evaluated: using fewer references, it performed better in terms of accuracy, robustness, security and energy efficiency. These results confirm that the proposed selection method and associated localisation algorithm allow reliable and accurate location information to be gathered using a minimum number of references, decreasing the computational burden of gathering and analysing location data from the high number of references previously believed to be necessary.

Thesis (PhD (Eng)), University of Pretoria, 2011. Electrical, Electronic and Computer Engineering.
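As a toy illustration of the idea (an assumed sketch, not the thesis algorithm, whose selection criteria come from information-fusion theory), one can score references by how consistent their range measurements are with a rough first fit, keep the most consistent few, and re-estimate the position by linearized least squares:

    import numpy as np

    def lls_position(anchors, ranges):
        # Linearized trilateration: subtract the last anchor's equation from
        # the others to eliminate the quadratic term, then solve least squares.
        a, r = anchors, ranges
        A = 2.0 * (a[:-1] - a[-1])
        b = (r[-1]**2 - r[:-1]**2
             + np.sum(a[:-1]**2, axis=1) - np.sum(a[-1]**2))
        return np.linalg.lstsq(A, b, rcond=None)[0]

    def select_references(anchors, ranges, k):
        # Keep the k references most consistent with a rough first estimate.
        x0 = lls_position(anchors, ranges)
        residuals = np.abs(np.linalg.norm(anchors - x0, axis=1) - ranges)
        return np.argsort(residuals)[:k]

    anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 1.]])
    true_pos = np.array([3., 4.])
    ranges = np.linalg.norm(anchors - true_pos, axis=1)
    ranges[4] += 3.0                       # one reference with a large error
    keep = select_references(anchors, ranges, k=4)
    print(lls_position(anchors[keep], ranges[keep]))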
|
46 |
Chaînes de Markov incomplètement spécifiées : analyse par comparaison stochastique et application à l'évaluation de performance des réseaux / Incompletely specified Markov chains: stochastic comparison analysis and application to network performance evaluation
Ait Salaht, Farah 03 October 2014
This thesis is devoted to uncertainty in probabilistic models: how it impacts their analysis, and how these methods apply to performance analysis and network dimensioning. We consider two aspects of uncertainty. The first is the study of partially specified Markov chains in discrete time, whose transition probabilities or rates are not perfectly known. When some transitions of the exact system are missing because of its complexity, bounding systems can be constructed in which worst-case transitions are defined, yielding upper or lower bounds on the performance measures. We propose new algorithms that give element-wise bounds on the steady-state distribution of a partially specified Markov chain. These algorithms are faster than existing ones and compute element-wise bounds at each iteration.

The second aspect concerns measurements of real traffic traces in networks. Exact analysis of queueing networks under real traffic quickly becomes intractable due to state explosion, and traffic traces are often too voluminous to fit accurately to a known probability law. Assuming stationarity of the flows, we use a histogram description of the traffic and apply the stochastic comparison method to derive bounds on performance measures under histogram-based traffic. An algorithm based on dynamic programming derives optimal bounding traffic histograms on reduced state spaces. Using the stochastic bound histograms and the monotonicity of the network elements, we show how to obtain, very efficiently, guarantees on performance measures: stochastic upper and lower bounds on buffer occupancy, losses, and so on. The interest and impact of the method are shown on various applications: network elements, AQM, queueing networks, and queues with non-stationary arrival processes.
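To make the histogram-bounding idea concrete, here is a small sketch (our own illustration: the thesis selects the reduced support optimally by dynamic programming, whereas the support below is fixed by hand). Moving each probability mass up to the next retained support point yields a histogram that upper-bounds the original in the strong stochastic order:

    import numpy as np

    def st_upper_bound(values, probs, support):
        # Map each value to the smallest retained support point >= it, so the
        # resulting histogram stochastically dominates the original (<=_st).
        support = np.sort(np.asarray(support, dtype=float))
        assert support[-1] >= np.max(values), "support must cover max(values)"
        idx = np.searchsorted(support, values, side="left")
        out = np.zeros(len(support))
        for i, p in zip(idx, probs):
            out[i] += p
        return support, out

    values = np.array([1., 2., 3., 5., 8.])
    probs = np.array([0.2, 0.3, 0.2, 0.2, 0.1])
    support, bound = st_upper_bound(values, probs, support=[2., 5., 8.])
    print(support, bound)      # mass moved up: [0.5, 0.4, 0.1]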
|
47 |
Improving Sinkhole Mapping Using LiDAR Data and Assessing Road Infrastructure at Risk in Johnson City, TN
Fasesin, Kingsley; Luffman, Ingrid; Ernenwein, Eileen; Nandi, Arpita 05 April 2018
Department of Geosciences, College of Arts and Sciences, East Tennessee State University, Johnson City, TN
Predicting infrastructure damage and the economic impact of sinkholes along roadways requires mapping the sinkhole distribution and developing a model that predicts future occurrences with high accuracy. This study was carried out to define the distribution of sinkholes in Johnson City, TN and the risks they pose to roads in the city. The study used a 2.5 ft Digital Elevation Model (DEM) derived from Light Detection and Ranging (LiDAR) data acquired from the Tennessee Geospatial Clearinghouse (TNGIS) and an inventory of known sinkholes identified from topographic maps. Depressions were identified by subtracting a filled-depressions DEM from the original study-area DEM. Using a spatial join, mapped sinkholes were matched to depression polygons identified from the LiDAR-derived DEM. For all matched sinkhole-polygon pairs, three indices were calculated: circularity index, area ratio of the minimum bounding rectangle, and proximity to train tracks and roads. The dataset was partitioned into training (70%) and validation (30%) subsets and, using the training subset, thresholds for each index were selected from typical values for known sinkholes. These rules were calibrated on the 30% validation subset and applied as filters to the remaining unmatched depression polygons to identify likely sinkholes. A portion of these suspected sinkholes were field checked. The future direction of this research is to generate a sinkhole formation model for the study area by examining the relationship between the mapped sinkhole distribution and previously identified sinkhole formation risk factors, including proximity to fault lines, groundwater and streams; depth to bedrock; and soil and land cover type. Spatial logistic regression will be used for model development, and the results will be used to generate a sinkhole susceptibility map to be overlain on the road network to identify the portions of interstate and state highways at risk of sinkhole damage.
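For illustration, the two shape indices might be computed as below (a hedged sketch: circularity uses the standard 4πA/P² formula, and an axis-aligned rectangle stands in for the true minimum, possibly rotated, bounding rectangle used in the study):

    import numpy as np

    def area_perimeter(xy):
        # Shoelace formula for area; summed edge lengths for perimeter.
        x, y = xy[:, 0], xy[:, 1]
        xn, yn = np.roll(x, -1), np.roll(y, -1)
        area = 0.5 * abs(np.sum(x * yn - xn * y))
        perimeter = np.sum(np.hypot(xn - x, yn - y))
        return area, perimeter

    def circularity(xy):
        # 1.0 for a circle, smaller for irregular depressions.
        area, perimeter = area_perimeter(xy)
        return 4.0 * np.pi * area / perimeter**2

    def bounding_rect_area_ratio(xy):
        # Axis-aligned stand-in for the minimum-bounding-rectangle ratio.
        width, height = xy.max(axis=0) - xy.min(axis=0)
        area, _ = area_perimeter(xy)
        return area / (width * height)

    square = np.array([[0., 0.], [4., 0.], [4., 4.], [0., 4.]])
    print(circularity(square), bounding_rect_area_ratio(square))  # ~0.785 1.0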
|
48 |
Fashion Object Detection and Pixel-Wise Semantic Segmentation: Crowdsourcing framework for image bounding box detection & Pixel-Wise Segmentation
Mallu, Mallu January 2018
Technology has revamped every aspect of our lives, and one of those facets is the fashion industry. Plenty of deep learning architectures are taking shape to augment the fashion experience, and there are numerous possibilities for enhancing fashion technology with deep learning. One key idea is to generate fashion styles and recommendations using artificial intelligence. Another significant task is to gather reliable information on fashion trends, which includes analysis of existing fashion-related images and data. When dealing specifically with images, localisation and segmentation are well-known approaches for in-depth study of the pixels, objects and labels present in an image. In this master thesis a complete framework is presented to perform localisation and segmentation on fashionista images. This work is part of a research project on fashion style detection and recommendation. The developed solution localises fashion items in an image by drawing bounding boxes and labelling them. It also provides pixel-wise semantic segmentation functionality that extracts fashion-item label-pixel data. The collected data can serve both as ground truth and as training data for the targeted deep learning architecture. A study of the localisation and segmentation of videos is also presented. The developed system has been evaluated in terms of flexibility, output quality and reliability compared to similar platforms, and has proven to be a fully functional solution capable of providing the essential localisation and segmentation services while keeping the core architecture simple and extensible.
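As a sketch of the kind of data such a framework emits (the field names and the rasterisation helper are illustrative assumptions, not the actual schema of the developed system), each crowdsourced annotation pairs a labelled bounding box with a polygon that can be rasterised into a pixel-wise mask:

    import numpy as np

    # Hypothetical annotation record; the real schema may differ.
    annotation = {
        "image_id": "fashionista_0001.jpg",
        "label": "jacket",
        "bbox": [120, 80, 210, 310],          # x_min, y_min, x_max, y_max
        "polygon": [(120, 80), (210, 80), (210, 310), (120, 310)],
    }

    def polygon_to_mask(polygon, height, width):
        # Even-odd ray casting: a pixel is inside if a ray to its right
        # crosses the polygon boundary an odd number of times.
        poly = np.asarray(polygon, dtype=float)
        yy, xx = np.mgrid[0:height, 0:width]
        inside = np.zeros((height, width), dtype=bool)
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            crosses = (y1 <= yy) != (y2 <= yy)
            denom = np.where(y2 != y1, y2 - y1, 1.0)
            x_int = x1 + (yy - y1) * (x2 - x1) / denom
            inside ^= crosses & (xx < x_int)
        return inside

    mask = polygon_to_mask(annotation["polygon"], height=400, width=300)
    print(mask.sum(), "labelled pixels for", annotation["label"])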
|
49 |
Three Essays in Parallel Machine Scheduling
Garg, Amit January 2008
No description available.
|
50 |
Développement d'antennes multi-faisceaux multicouches de la bande Ku à la bande V / On the development of multi-beam multilayer antennas from Ku to V band
Tekkouk, Karim 03 April 2014
This PhD thesis deals with the design of multi-beam antennas, in which a single radiating aperture is used to generate several beams with high gain and a large field of view. The multi-beam operation is achieved using two topologies of beam-forming networks (BFNs): quasi-optical BFNs and circuit-based BFNs. For each category, several solutions have been proposed and validated experimentally. For the quasi-optical configurations, pillbox structures, monopulse antennas in pillbox technology (both variants of the monopulse technique, to increase the angular resolution), and bilayer and multilayer Rotman lenses have been considered. For circuit-based multi-beam antennas, two solutions have been analysed: a phased array for SATCOM applications in the framework of a national ANR project, and a Butler matrix with controlled side-lobe levels for the radiated beams, within a collaboration with the Tokyo Institute of Technology, Japan. The proposed concepts and antenna solutions have been considered in different frequency bands: Ku, K and V. Two technologies, chosen mainly for cost reasons, have been adopted for the fabrication of the prototypes. The first is Substrate Integrated Waveguide (SIW) technology, which combines the cost advantages of the printed circuit board (PCB) fabrication process with the efficiency of classical waveguide technology; considerable effort has been devoted to the implementation of multilayer SIW structures, to go beyond the current national state of the art in PCB fabrication. The second is the diffusion bonding technique, developed at the Ando and Hirokawa laboratory at the Tokyo Institute of Technology, which consists of bonding laminated thin metal plates under high temperature and high pressure. This technique allows the fabrication of planar hollow waveguide structures with efficiencies above 80% in the millimetre-wave band.
|