591
Decoupled approaches to register and software controlled memory allocations / Approches découplées aux problèmes d'allocations de registres et de mémoires locales. Diouf, Boubacar, 15 December 2011.
Despite the benefits of the memory hierarchy, it is still essential, in order to reduce accesses to higher levels of memory, to make efficient use of the registers and of the software-managed local memories (also called scratchpad memories) present in most embedded processors, graphics processors (GPUs) and network processors. During compilation, from a source language to executable code, two optimizations are of utmost importance: register allocation and local memory allocation. In this thesis we are interested in decoupled approaches, which solve the allocation and assignment problems separately and thereby improve the quality of register and local memory allocations. In the first part of the thesis we address two aspects of the register allocation problem: improving just-in-time (JIT) register allocation, and the spill minimization problem. We introduce split register allocation, which leverages the decoupled approach to improve register allocation in the context of JIT compilation: allocation is performed in two stages, one during static compilation and one during dynamic compilation, reducing program execution time with a negligible impact on compilation time. We experimentally validate the effectiveness of split register allocation and its portability with respect to register count variations, relying on annotations whose impact on bytecode size is negligible. We then introduce a new decoupled approach, called iterated-optimal allocation, which focuses on the spill minimization problem. The iterated-optimal allocation algorithm achieves results close to optimal while offering pseudo-polynomial guarantees for SSA programs and fast allocations on general programs, even though spill minimization is NP-complete even inside a basic block. In the second part of the thesis, we study how a decoupled local memory allocation can be proposed in light of recent progress in register allocation. We first validate experimentally the intuition that the allocation and assignment problems can be solved separately for local memories. We then study decoupled local memory allocation in a more theoretical way, establishing the junction between local memory allocation for linearized programs and weighted interval graph coloring. We design and analyze a new variant of the ship-building problem, which we call the submarine-building problem, and show that it is NP-complete on interval graphs while solvable in linear time on proper interval graphs (equivalent to unit interval graphs); it is the first problem known to exhibit this complexity gap between interval graphs and unit interval graphs. In the third part of the thesis, we propose a heuristic, the clustering allocator, founded on the construction of stable sub-graphs of an interference graph, which decouples the allocation problem for both registers and local memories and aims to minimize allocation cost. Although devised for local memory allocation, the clustering allocator turns out to be a very good solution to the register allocation problem as well, and may serve as a bridge reconciling the local memory allocation and register allocation problems.
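For intuition, the allocation half of a decoupled register allocator (deciding which values live in one of R registers and which spill) can be illustrated with a generic linear-scan pass over live intervals. This is a textbook-style sketch, not the thesis's split or iterated-optimal algorithm; the data layout and the spill-the-longest-interval heuristic are assumptions.

```python
# Linear-scan sketch of the allocation half of a decoupled register
# allocator: choose which live intervals get one of `num_regs` registers
# and which are spilled. Spill heuristic and representation are assumed.
def allocate(intervals, num_regs):
    """intervals: list of (name, start, end) live ranges."""
    active, in_regs, spilled = [], set(), set()
    for name, start, end in sorted(intervals, key=lambda iv: iv[1]):
        active = [iv for iv in active if iv[2] > start]   # retire expired
        if len(active) < num_regs:
            active.append((name, start, end))
            in_regs.add(name)
        else:
            # Heuristic: spill whichever candidate interval lives longest.
            victim = max(active + [(name, start, end)], key=lambda iv: iv[2])
            if victim[0] == name:
                spilled.add(name)
            else:
                active.remove(victim)
                in_regs.discard(victim[0])
                spilled.add(victim[0])
                active.append((name, start, end))
                in_regs.add(name)
    return in_regs, spilled

print(allocate([("a", 0, 9), ("b", 1, 3), ("c", 2, 4), ("d", 5, 8)], 2))
```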
592
Mapeamento pedológico digital via regressão geograficamente ponderada e lógica booleana: uma estratégia integrada entre dados espectrais terrestres e de satélite / Digital pedological mapping by geographically weighted regression and boolean logic: an integrated strategy between terrestrial and satellite spectral data. Luiz Gonzaga Medeiros Neto, 10 February 2017.
Soil maps are important sources of information for agriculture, but they are practically nonexistent at appropriate scales for Brazil, and surveying them by the conventional method at the scale of Brazilian demand is impracticable. As an alternative, digital pedological mapping is an area of knowledge that relates field, laboratory and point soil information through quantitative methods based on satellite images and relief attributes in order to predict soil attributes and classes. The literature therefore highlights the importance of the spatial position of sampling points when estimating soil attributes from the spectral values of satellite images; in addition, crossing the estimated, spatialized soil attributes is important to arrive at soil classes. Given this, the objective is the development of a technique based on satellite imagery, spectral data and relief attributes, integrated by Boolean logic, to produce soil maps. The work was carried out in the municipality of Rio das Pedras, SP, and surroundings, over a total area of 47,882 ha. Multitemporal satellite images were processed to obtain spectral information of the exposed soil surface. This information was correlated with laboratory spectra of sample points in the subsurface (80-100 cm depth), and spectra simulating satellite bands were estimated for unknown locations. A soil classification key was built by crossing attribute maps through Boolean logic, with the following attributes mapped: clay, base saturation (V%) and organic matter (OM) at 0-20 cm depth, and clay, CEC, base saturation, aluminium saturation (m%), Al, total iron, hue, value and chroma at 80-100 cm depth. The estimates of subsurface spectra and of soil attributes at both depths were produced with the multivariate technique of geographically weighted regression (GWR), whose predictive performance was evaluated by comparison with multiple linear regression (MLR). The results showed correlation between the spectra of the two depths, with validation R2 above 0.6. Clay (0-20 and 80-100 cm), hue, value and chroma were the soil attributes with the best estimates, with R2 above 0.6. The GWR multivariate technique outperformed MLR. Compared with detailed soil maps from conventional surveys, the digital soil map obtained a kappa index of 34.65% and an overall accuracy of 54.46%. This represents a fair level of classification; on the other hand, the region is geologically highly complex and comprises heterogeneous soils. The technique shows potential for development in digital soil mapping as soil attribute estimates and the classification key criteria are refined.
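As an illustration of the geographically weighted regression named above, the sketch below fits a weighted least-squares model locally at each target location, with weights from a Gaussian distance kernel. The kernel, bandwidth and synthetic data are assumptions; real studies calibrate the bandwidth, for example by cross-validation.

```python
# Minimal geographically weighted regression (GWR): a weighted least-squares
# fit at each target location, with Gaussian kernel weights computed from
# the distance to the calibration samples. Illustrative sketch only.
import numpy as np

def gwr_predict(coords, X, y, target_coords, target_X, bandwidth):
    X1 = np.column_stack([np.ones(len(X)), X])       # add intercept column
    preds = []
    for c, x in zip(target_coords, target_X):
        d = np.linalg.norm(coords - c, axis=1)       # distances to samples
        w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
        XtW = X1.T * w                               # == X1.T @ diag(w)
        beta = np.linalg.solve(XtW @ X1, XtW @ y)    # local weighted LS fit
        preds.append(np.r_[1.0, x] @ beta)
    return np.array(preds)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, (50, 2))                 # sample point locations
X = rng.normal(size=(50, 3))                         # e.g. simulated bands
y = X @ [2.0, -1.0, 0.5] + 0.3 * coords[:, 0] + rng.normal(0, 0.1, 50)
print(gwr_predict(coords, X, y, coords[:5], X[:5], bandwidth=2.0))
```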
593
Aplicação do método de Taguchi para redução de porosidade de peças fundidas sob pressão / Application of the Taguchi method to reduce porosity in die-cast parts. Denilson José Viana, 28 September 2012.
The aluminum die casting process has developed significantly in recent decades, occupying a prominent place in industry for producing innovative engineering components. Among the quality problems of this process, the most recurrent is porosity, caused by several factors, including the process parameters, which are difficult to determine and are commonly selected by a trial-and-error approach. This dissertation sought to answer the question: how to determine the best configuration of aluminum die casting process parameters to minimize porosity in the produced parts? Its objective was to improve the quality of a die-cast aluminum part by reducing porosity. The main contribution of the dissertation is the application of the Taguchi method using ordinal categorical data (porosity classes) as the quality characteristic, analyzed through a weighted signal-to-noise ratio. The experimental results were analyzed via the average effects of the factors and analysis of variance (ANOVA). In conclusion, metal temperature and first- and second-phase injection speed were the most significant parameters in reducing the porosity of the studied part. Moreover, the Taguchi method achieved the expected result, bringing a significant reduction of porosity in the studied part by optimizing the process parameters.
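The weighted signal-to-noise analysis over ordinal porosity classes can be sketched as follows, in a common smaller-the-better form. The demerit weights per class are hypothetical, and the dissertation's exact weighting scheme may differ.

```python
# Smaller-the-better Taguchi S/N ratio over ordinal porosity classes, with
# hypothetical demerit weights per class; illustrative sketch only.
import math

CLASS_WEIGHTS = {1: 1.0, 2: 3.0, 3: 9.0}   # assumed demerits: class 3 = worst

def weighted_sn_smaller_better(class_counts):
    """class_counts maps porosity class -> number of castings in that class."""
    n = sum(class_counts.values())
    mean_sq = sum(count * CLASS_WEIGHTS[cls] ** 2
                  for cls, count in class_counts.items()) / n
    return -10.0 * math.log10(mean_sq)     # higher S/N means less porosity

# One experimental run: 6 castings in class 1, 2 in class 2, 1 in class 3.
print(round(weighted_sn_smaller_better({1: 6, 2: 2, 3: 1}), 2))
```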
594
Comparação de algoritmos para o Problema dos K Menores Caminhos / Comparison of algorithms for the K Shortest Paths Problem. Diogo Haruki Kykuta, 19 February 2018.
The K Shortest Paths Problem is a generalization of the Shortest Path Problem in which we must find the K lowest-cost paths between two vertices of a graph. We study and implement algorithms that solve this problem on weighted directed graphs, allowing only paths with no repeated vertices. We compare the implementations empirically, using graphs from the 9th DIMACS Implementation Challenge. We identify the strengths and weaknesses of each algorithm and propose a hybrid variant of Feng's and Pascoal's algorithms. This variant achieves better performance than both base algorithms on some graphs, and beats at least one of them in the vast majority of tests.
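Feng's and Pascoal's algorithms are deviation-style refinements in the family of Yen's classic algorithm for the K shortest loopless paths. The sketch below is plain Yen over an assumed adjacency-dict representation, for illustration only.

```python
# Compact sketch of Yen's algorithm for the K shortest loopless paths,
# the classic deviation scheme that Feng's and Pascoal's algorithms refine.
import heapq

def dijkstra(adj, src, dst, banned_edges=frozenset(), banned_nodes=frozenset()):
    pq, seen = [(0, src, (src,))], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, list(path)
        if u in seen:
            continue
        seen.add(u)
        for v, w in adj.get(u, []):
            if v not in seen and v not in banned_nodes and (u, v) not in banned_edges:
                heapq.heappush(pq, (cost + w, v, path + (v,)))
    return None

def yen_k_shortest(adj, src, dst, K):
    weight = {(u, v): w for u, nbrs in adj.items() for v, w in nbrs}
    best = dijkstra(adj, src, dst)
    if best is None:
        return []
    A, B = [best], []                        # A: accepted paths, B: candidates
    while len(A) < K:
        prev = A[-1][1]
        for i in range(len(prev) - 1):       # deviate at each node of last path
            root = prev[:i + 1]
            banned_e = {(p[i], p[i + 1]) for _, p in A
                        if len(p) > i + 1 and p[:i + 1] == root}
            banned_n = frozenset(root[:-1])  # keep deviations loopless
            spur = dijkstra(adj, root[-1], dst, banned_e, banned_n)
            if spur:
                cost = sum(weight[e] for e in zip(root, root[1:])) + spur[0]
                cand = (cost, root[:-1] + spur[1])
                if cand not in A and cand not in B:
                    heapq.heappush(B, cand)
        if not B:
            break
        A.append(heapq.heappop(B))
    return A

g = {"s": [("a", 1), ("b", 2)], "a": [("t", 1), ("b", 1)], "b": [("t", 1)]}
print(yen_k_shortest(g, "s", "t", 3))
```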
595
Deep Learning for Point Detection in Images. Runow, Björn, January 2020.
The main result of this thesis is a deep learning model named BearNet, which can be trained to detect an arbitrary number of objects as a set of points. The model is trained using the weighted Hausdorff distance as loss function. BearNet has been applied and tested on two problems from industry: from an intensity image, detect the two pocket points of an EU pallet, which an autonomous forklift could use when determining where to insert its forks; and from a depth image, detect the start, bend and end points of a straw attached to a juice package, in order to help determine whether the straw has been attached correctly. In the development of BearNet I took inspiration from the designs of U-Net, UNet++ and a high-resolution network named HRNet. Further, I used a dataset containing RGB images from a surveillance camera located inside a mall, on which the aim was to detect the head positions of all pedestrians. In an attempt to reproduce a result from another study, I found that the mall dataset suffers from training-set contamination when a model is trained, validated and tested on it with random sampling. Hence, I propose that the mall dataset be evaluated with a sequential data-split strategy to limit the problem. I found that the BearNet architecture is well suited for both the EU-pallet and straw datasets, and that it can be successfully used on RGB, intensity or depth images. On the EU-pallet and straw datasets, BearNet consistently produces point estimates within five and six pixels of ground truth, respectively. I also show that the straw dataset constitutes only a small subset of all the challenges that exist in the problem domain related to the attachment of a straw to a juice package, and that one therefore cannot train a robust deep learning model on it. As an example of this, models trained on the straw dataset cannot correctly handle samples in which no straw is visible.
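A sketch of the weighted Hausdorff distance (proposed by Ribera et al. as a loss between a pixel-wise probability map and a set of ground-truth points) follows. The epsilon guard and the generalized-mean parameter alpha are conventional assumed values; BearNet's exact formulation may differ in detail.

```python
# Sketch of the weighted Hausdorff distance between a pixel-wise probability
# map `prob` and ground-truth points, in the spirit of Ribera et al.
import numpy as np

def weighted_hausdorff(prob, points, alpha=-9.0, eps=1e-6):
    h, w = prob.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                    axis=-1).reshape(-1, 2).astype(float)
    p = prob.reshape(-1)
    d = np.linalg.norm(grid[:, None, :] - np.asarray(points, float)[None, :, :],
                       axis=-1)                 # |pixels| x |points| distances
    d = np.maximum(d, eps)                      # avoid 0 ** negative alpha
    # Map-to-points term: predicted mass is pulled toward its nearest point.
    term1 = (p * d.min(axis=1)).sum() / (p.sum() + eps)
    # Points-to-map term: p-weighted generalized mean, a soft minimum for
    # large negative alpha, so every ground-truth point must be covered.
    m = ((p[:, None] * d ** alpha).sum(axis=0) / (p.sum() + eps)) ** (1.0 / alpha)
    return term1 + m.mean()

prob = np.full((16, 16), 1e-3)
prob[4, 5] = prob[10, 12] = 0.95
print(weighted_hausdorff(prob, [(4, 5), (10, 12)]))
```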
596
Tractographie globale sous contraintes anatomiques / Global tractography constrained by anatomical priors. Teillac, Achille, 16 October 2017.
This work aims at developing a method for inferring white matter fibers using a global spin-glass approach constrained by anatomical priors. Unlike usual methods, which build fibers independently from one another, this Markovian approach reconstructs the whole tractogram in a single process by minimizing a global energy that depends on the spin-glass configuration (position, orientation, length and connections) and on its match with the local diffusion process, in order to increase the robustness of the algorithm and the anatomical reliability of the reconstructed fibers. Alongside the development of the global tractography algorithm, the work done during this PhD consisted in studying the feasibility of integrating anatomical priors stemming from T1-weighted MRI and from new diffusion MRI microstructure approaches that provide microstructural information about the underlying tissue. In particular, the algorithm was built to allow high fiber curvature close to the cortical ribbon, enabling connections not only at the crowns of gyri but also on their walls. The NODDI (Neurite Orientation Dispersion and Density Imaging) model has become increasingly popular in recent years thanks to its compatibility with clinical routine; it quantifies neurite density and the angular dispersion of axons. High dispersion indicates either several fiber populations with different orientations or a strongly curved fascicle within a voxel. The orientation dispersion is therefore used in our global tractography framework to relax the curvature constraint near the cerebral cortex when the angular dispersion is high, allowing fibers to align with the local normal to the cortical surface. This constraint is removed when the angular dispersion stays low, indicating a low-curvature trajectory, as for fibers projecting to the fundus of a gyrus or for U-fibers. The performance of this new anatomically constrained tractography approach was evaluated on simulated data and tested on high-resolution post-mortem MRI acquisitions and on millimeter-resolution in vivo MRI acquisitions. In parallel with this methodological development, a study of the local-regional correlations between neurite density and cerebral activation on the cortical surface was carried out. The study was conducted on the cohort of healthy volunteers scanned in the frame of the European CONNECT project, which includes anatomical, diffusion and functional data based on paradigms probing, in particular, the motor, language and visual networks. The anatomical data were used to extract the pial surface and an individual parcellation of the cortical surface for each volunteer; the diffusion data were used to compute individual maps of neurite density within the cortical ribbon; and the functional BOLD (Blood Oxygen Level Dependent) data were used to compute individual z-score maps of the general linear model for specific contrasts. A co-localization of neurite density maxima and activation peaks was observed, which may indicate an increase of neurite density within functional networks to improve their efficiency. The study also corroborated the lateralization of the language and motor functional networks, in agreement with the handedness of the scanned population, while an increase of neurite density in the right visual cortex was observed, which may correlate with visuo-spatial attention results reported in the literature on non-human primates.
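The cortex-aware curvature prior can be caricatured in a few lines: the bending penalty between successive fiber segments is scaled down where the NODDI orientation dispersion index is high, so fibers may turn sharply into the cortical ribbon. The functional forms and constants below are illustrative assumptions, not the thesis's actual spin-glass energy.

```python
# Toy form of a cortex-aware bending prior: the curvature penalty between
# consecutive unit segment directions is relaxed where the NODDI orientation
# dispersion index (ODI, in [0, 1]) is high. Illustrative assumptions only.
import numpy as np

def bending_energy(dir_prev, dir_next, odi, base_weight=1.0):
    cos_theta = np.clip(np.dot(dir_prev, dir_next), -1.0, 1.0)
    angle = np.arccos(cos_theta)          # turning angle between segments
    weight = base_weight * (1.0 - odi)    # high dispersion -> cheaper turns
    return weight * angle ** 2

straight = np.array([0.0, 0.0, 1.0])
turning = np.array([0.0, 1.0, 0.0])
print(bending_energy(straight, turning, odi=0.1))  # deep white matter: costly
print(bending_energy(straight, turning, odi=0.8))  # near cortex: cheap turn
```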
597
Formal approaches to multi-resource sharing scheduling / Approches formelles de la planification du partage de plusieurs ressources. Rahimi, Mahya, 08 December 2017.
The objective of scheduling problems is to find the optimal execution sequence for a set of tasks while respecting predefined constraints and optimizing a cost: time, energy, etc. Unlike classical approaches, automata models are expressive and also robust against changes in parameter settings and in the problem specification. Moreover, few studies have used formal verification approaches to address scheduling problems, and none of them has considered challenging practical issues such as multi-resource sharing, an uncontrollable environment, and reaching the optimal schedule quickly enough for industrial use. The main objective of this thesis is to propose an efficient modeling and solving approach for the scheduling problem, considering multi-resource sharing and potential uncertainty in the occurrence of certain events. For this purpose, after an introduction in Chapter 1, Chapter 2 addresses the scheduling problem through a visual, expressive and formal modeling approach based on weighted automata and the theory of timed automata. The originality of the proposed approach lies in its ability to handle the sharing of multiple resources and in an efficient solving approach. The proposed models have the advantage of being directly exploitable by formal verification tools, through a proof-by-contradiction argument on the existence of a solution; the results are obtained with the UPPAAL tool. To solve the problem, an algorithm based on iterated reachability analysis, which searches for and iteratively improves solutions, is developed to obtain a sub-optimal makespan. The results show that the proposed model and solving approach offer very promising complexity on the class of problems studied and can be applied to industrial cases. In Chapter 3, a synchronous composition of weighted automata is proposed to solve the scheduling problem by performing an optimal reachability analysis directly on the weighted automata models. In Chapter 4, various uncontrollable behaviors such as start times, task durations and failure occurrences are modeled by timed game automata; the problem is then solved by synthesizing a time-optimal strategy in the synthesis tool TIGA.
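To make the decision content of multi-resource makespan minimization concrete, the toy sketch below enumerates task orders with exclusive resource use and keeps the best completion time. It is a brute-force baseline with hypothetical tasks, not the thesis's weighted/timed-automata encoding or the UPPAAL reachability analysis.

```python
# Brute-force makespan minimization for tasks that hold their resources
# exclusively: enumerate task orders, place each task greedily as soon as
# all of its resources are free. Toy baseline only.
import itertools

TASKS = {"t1": (3, {"m1"}), "t2": (2, {"m1", "m2"}), "t3": (2, {"m2"})}

def makespan(order):
    free = {}                                  # resource -> earliest free time
    end = 0
    for t in order:
        dur, res = TASKS[t]
        start = max(free.get(r, 0) for r in res)
        for r in res:
            free[r] = start + dur
        end = max(end, start + dur)
    return end

best = min(itertools.permutations(TASKS), key=makespan)
print(best, makespan(best))                    # e.g. ('t1', 't3', 't2') 5
```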
598
Stanovení hodnoty vybrané společnosti / Value Estimating of a Specific Company. Jankechová, Simona, January 2017.
The aim of this thesis is to determine the value of the company Technicky skusobny ustav, based in Piešťany. The first part of the work covers the theoretical basis for company valuation: it describes the reasons for valuing companies and the procedure a valuation follows. A description of selected parts of strategic and financial analysis, the financial plan and the valuation methods follows. The chosen company is then presented and valued according to the methods described in the theoretical part. To determine the value of the company, the discounted free cash flow method and the economic value added method were used. In the conclusion, the chosen valuation methods are applied and a statement is made about the value of the company as of 31 December 2015.
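The two valuation methods named above can be sketched in a few lines: discounted free cash flow with a Gordon-growth terminal value, and economic value added as NOPAT minus a capital charge. All figures, the WACC and the growth rate are hypothetical.

```python
# Sketch of the two methods: DCF with a Gordon-growth terminal value, and
# EVA as NOPAT minus a capital charge. All inputs are hypothetical figures.
def dcf_value(fcfs, wacc, growth):
    """Present value of explicit-period free cash flows plus terminal value."""
    pv = sum(f / (1 + wacc) ** t for t, f in enumerate(fcfs, start=1))
    terminal = fcfs[-1] * (1 + growth) / (wacc - growth)   # Gordon growth
    return pv + terminal / (1 + wacc) ** len(fcfs)

def eva(nopat, invested_capital, wacc):
    """Economic value added: operating profit above the cost of capital."""
    return nopat - wacc * invested_capital

print(round(dcf_value([120.0, 130.0, 140.0], wacc=0.09, growth=0.02), 1))
print(eva(nopat=150.0, invested_capital=1200.0, wacc=0.09))
```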
599
Porovnávání jazyků a redukce automatů používaných při filtraci síťového provozu / Comparing Languages and Reducing Automata Used in Network Traffic Filtering. Havlena, Vojtěch, January 2017.
The focus of this thesis is the comparison of languages and the reduction of automata used in network traffic monitoring. Several approaches are proposed for the approximate (language non-preserving) reduction of automata and for the comparison of their languages. The reductions are based either on under-approximating the language of an automaton by pruning its states, or on over-approximating the language by introducing new self-loops (and pruning redundant states afterwards). The proposed approximate reduction methods and the proposed probabilistic distance exploit information from network traffic. Formal guarantees are provided with respect to a model of network traffic represented by a probabilistic automaton. The methods were implemented and evaluated on automata used in network traffic filtering.
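The over-approximating reduction idea can be illustrated on a toy NFA: turning a state into an accepting sink that loops on every symbol can only grow the accepted language, so a filter reduced this way stays sound for raising alerts. The encoding below is an assumption for illustration, not the thesis's implementation.

```python
# Toy over-approximating reduction: make a chosen state an accepting sink
# that loops on every symbol. Any word that can reach it is then accepted,
# so the language only grows. Simplified NFA encoding, for illustration.
def add_universal_selfloop(nfa, state, alphabet):
    trans = {k: set(v) for k, v in nfa["trans"].items()}
    for a in alphabet:
        trans[(state, a)] = {state}            # loop on every symbol
    return {"init": nfa["init"], "trans": trans,
            "accepting": set(nfa["accepting"]) | {state}}

def accepts(nfa, word):
    states = {nfa["init"]}
    for a in word:
        states = set().union(*(nfa["trans"].get((q, a), set()) for q in states))
    return bool(states & nfa["accepting"])

nfa = {"init": 0, "trans": {(0, "a"): {1}, (1, "b"): {2}}, "accepting": {2}}
over = add_universal_selfloop(nfa, 1, alphabet="ab")
print(accepts(nfa, "ab"), accepts(over, "ab"))   # True True
print(accepts(nfa, "aa"), accepts(over, "aa"))   # False True (over-approx.)
```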
600
Určování hodnoty podniku / Business Valuation. Balgová, Michaela, January 2020.
This master's thesis deals with determining the value of the company Českomoravský cement, a.s. The theoretical part defines the concepts and definitions subsequently used in the analytical part of the thesis. The analytical part then comprises a strategic analysis, a financial analysis, a financial plan, an estimate of the discount rate and, finally, the resulting value of the company.