161 |
Metodologias da inteligência computacional em identificação e controle de processos: abordagem nebulosa, evolutiva e neural / Computational intelligence methodologies for process identification and control: a fuzzy, evolutionary, and neural approach. Coelho, Leandro dos Santos, January 1997
Master's dissertation, Universidade Federal de Santa Catarina, Centro Tecnológico. This dissertation covers the description, design, and implementation of computational intelligence methodologies for the identification and control of laboratory-scale processes, with the aim of adequately handling the constraints and requirements imposed in systems design. The computational intelligence methodologies employed are fuzzy logic, evolutionary computation, artificial neural networks, simulated annealing, and hybrid intelligent systems.
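As a hedged illustration of one of these methodologies, the sketch below uses simulated annealing to fit the gain and time constant of a first-order process model to sampled step-response data. The process, parameter ranges, and cooling schedule are invented for the example and are not taken from the dissertation.

```python
import math
import random

# Hypothetical "true" first-order process: y(t) = K * (1 - exp(-t / tau))
K_TRUE, TAU_TRUE = 2.0, 5.0
t_samples = [0.5 * i for i in range(40)]
y_samples = [K_TRUE * (1 - math.exp(-t / TAU_TRUE)) for t in t_samples]

def sse(params):
    """Sum of squared errors between model response and sampled data."""
    K, tau = params
    return sum((K * (1 - math.exp(-t / tau)) - y) ** 2
               for t, y in zip(t_samples, y_samples))

def anneal(temp=1.0, cooling=0.995, steps=5000):
    current = [random.uniform(0.1, 10), random.uniform(0.1, 20)]
    cost = sse(current)
    best, best_cost = current[:], cost
    for _ in range(steps):
        # Perturb parameters; the step size shrinks with the temperature
        cand = [max(0.1, p + random.gauss(0, temp)) for p in current]
        cand_cost = sse(cand)
        # Accept improvements always, worse moves with Boltzmann probability
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / temp):
            current, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = current[:], cost
        temp *= cooling
    return best, best_cost

params, err = anneal()
print(f"Estimated K={params[0]:.2f}, tau={params[1]:.2f}, SSE={err:.4f}")
```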
162 |
Técnicas de previsão aplicadas ao mercado de informática no Brasil / Forecasting techniques applied to the IT market in Brazil. Mirapalheta, Gustavo Corrêa, 31 January 1994
This work systematizes the study of the forecasting techniques relevant to analyzing the Brazilian IT market, grouping them into a practical model and presenting the results of their application. It builds on the current trend of integrating the available techniques into a dynamic, company-wide forecasting system.
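For illustration only, the sketch below implements simple exponential smoothing, one classical forecasting technique a model of this kind might group in; the quarterly sales series and smoothing factor are invented, not drawn from the thesis.

```python
# Simple exponential smoothing: forecast_{t+1} = alpha * y_t + (1 - alpha) * forecast_t
def exponential_smoothing(series, alpha=0.3):
    forecasts = [series[0]]  # seed the first forecast with the first observation
    for y in series[1:]:
        forecasts.append(alpha * y + (1 - alpha) * forecasts[-1])
    return forecasts

# Hypothetical quarterly unit sales for an IT product line
sales = [120, 132, 128, 145, 160, 152, 171, 183]
smoothed = exponential_smoothing(sales)
print(f"One-step-ahead forecast: {smoothed[-1]:.1f} units")
```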
163 |
Podpora testování v Microsoft Visual Studio / Software testing in Microsoft Visual Studio. Padevět, Daniel, January 2007
In the software development process there can be misunderstandings, communication problems between members of the development team, and mistakes in writing code or designing applications. These problems need to be resolved before the application is released into the production environment, and discovering them requires properly testing the software product. All of these activities belong to the discipline called Software Quality Assurance (SQA), an integral part of the software development process. The thesis discusses this discipline and procedures for software testing using Microsoft Visual Studio 2008. The main objective of this work is to create a comprehensive methodology for automated testing of web and Windows applications in Microsoft Visual Studio and to verify those procedures in practice. The beginning of the thesis covers the theoretical basis of software testing. The next chapter describes the various features Microsoft Visual Studio 2008 offers for software testing. The following chapter covers application testing in IBM Rational Functional Tester, which is suited to testing Windows applications (Microsoft Visual Studio 2008 does not support this kind of test). In the case-study part of the work, the reader learns how to test web and Windows applications; the case study can serve as a basis for a further methodological manual for teams engaged in software testing, and the use of the testing tools is explained through real examples. The work closes with recommendations for team collaboration using Microsoft Team Foundation Server, a comparison of the testing tools, and a review of the functions expected in Microsoft Visual Studio 2010.
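The thesis's tests would be written in C# against MSTest in Visual Studio 2008; purely to illustrate the arrange-act-assert pattern such automated tests follow, here is an equivalent sketch in Python's unittest, with a made-up function under test.

```python
import unittest

def apply_discount(price, percent):
    """Function under test: a hypothetical pricing rule."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        # Arrange / act / assert: the same structure an MSTest method follows
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

if __name__ == "__main__":
    unittest.main()
```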
164 |
Analýza Big Data v oblasti zdravotnictví / Big Data analysis in healthcare. Nováková, Martina, January 2014
This thesis deals with the analysis of Big Data in healthcare. The aim is to define the term Big Data, to acquaint the reader with data growth in the world and in the health sector, and to explain the concept of a data expert and define the members of a data expert team. The following chapters define the phases of Big Data analysis according to the EMC2 methodology and describe the basic technologies for analysing Big Data. I consider the part defining tasks in which Big Data technologies are already used in healthcare to be particularly beneficial and interesting. In the practical part I carry out a Big Data analysis task focused on meteorotropic diseases, using real medical and meteorological data. The reader is acquainted not only with one of the recommended methods of analysis and the statistical models used, but also with terms from biometeorology and healthcare. An integral part of the analysis is information about its limitations, the consultation of the results with experts in meteorology and healthcare, and their conclusions.
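As a sketch of the kind of elementary statistical check such an analysis might begin with, the code below computes the Pearson correlation between daily pressure swings and admission counts; all figures here are invented, whereas the thesis used real medical and meteorological data.

```python
import math

# Invented example data: daily atmospheric pressure change (hPa) and
# hospital admissions for a hypothetical meteorotropic condition
pressure_change = [0.5, 8.2, 1.1, 6.7, 0.3, 9.5, 2.4, 7.8, 1.9, 5.6]
admissions      = [12,  19,  11,  17,  10,  22,  13,  18,  12,  16]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(pressure_change, admissions)
print(f"Pearson r between pressure change and admissions: {r:.3f}")
```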
165 |
Implementace podpory studia v nástroji IBM Cognos / Implementation of study support in the IBM Cognos tool. Rajnoch, Adam, January 2012
According to foreign analyses, Business Intelligence has in recent years been one of the fastest-growing segments of the software industry. More and more companies realize the importance of Business Intelligence and dashboarding tools, which is why I chose this topic for my thesis. The main part of the thesis focuses on study support for the subject 4IT314 Enterprise Information Systems, which teaches students to work with the ERP system Microsoft Dynamics AX. The subject's main shortcoming is the lack of feedback to students, so it was decided to integrate a BI tool to provide dashboarding over the Microsoft Dynamics AX database. The theoretical part explains the basic terms essential to understanding BI and dashboarding, and on the basis of defined criteria a BI tool was chosen for integration into the subject 4IT314. The practical part describes the implementation of this tool and the user documentation created for it, which serves as study material for students.
166 |
Konkurenční analýza předních ICT firem na českém trhu / Competitive analysis of leading ICT companies on the Czech market. Dvořák, Oskar, January 2012
This thesis deals with the field of Competitive Intelligence, in particular the application of its methods and tools to competitive analysis of the market environment using modern social networks. The theoretical part characterizes the market environment of ICT companies using Porter's analysis and then describes selected tools and methods for processing unstructured data and analysing social networks. The practical part is based on a real project, running from early March 2013 at IBM, and demonstrates the current possibilities of information technology in the field of Competitive Intelligence.
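One elementary social-network measure such an analysis might compute is a mention count as a crude influence signal; the sketch below uses an invented interaction graph, not the project's data.

```python
from collections import defaultdict

# Invented interaction graph: who mentions whom on a social network
mentions = [("alice", "bob"), ("carol", "bob"), ("dave", "alice"),
            ("erin", "bob"), ("carol", "alice"), ("bob", "alice")]

in_degree = defaultdict(int)
for _, target in mentions:
    in_degree[target] += 1

# Rank accounts by how often they are mentioned (a crude influence signal)
for account, score in sorted(in_degree.items(), key=lambda kv: -kv[1]):
    print(f"{account}: mentioned {score} times")
```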
167 |
CORBA-Integration des Workflow-Management-Systems IBM FlowMark / CORBA integration of the IBM FlowMark workflow management system. Neubert, Uwe, 20 October 2017
Within the 'HematoWork' project, workflow management systems (WfMS) are to be used to monitor oncological treatment processes. The overall system to be built employs various applications alongside the WfMS, and the project investigates the applicability of CORBA as its middleware. It therefore became necessary to examine the interplay of CORBA and WfMS, specifically using IBM FlowMark as an example. To make methods of CORBA objects usable as applications or in transition conditions within the WfMS control flow, the thesis examines aspects of both workflow definition time and runtime. Besides a new workflow modelling language that abstracts from the technical aspects of CORBA method invocation, together with a compiler from it into IBM FlowMark's own workflow modelling language, a database was introduced that manages the lifetime and validity of data created or represented by CORBA objects.
168 |
SPSS Modeler Integration mit IBM DB2 Analytics Accelerator / SPSS Modeler integration with the IBM DB2 Analytics Accelerator. Nentwig, Markus, 27 February 2018
This thesis describes an architectural approach developed during a feasibility study at IBM. It enables the IBM DB2 Analytics Accelerator, a data-warehouse appliance, to build data-mining models with suitable algorithms directly on the accelerator via adapted interfaces. Alongside this description, the previous use of the DB2 Analytics Accelerator is presented, together with its surrounding environment, from database systems up to the System z mainframe.
Building on this, practical use cases are presented that apply intelligent methods to stored customer data to build statistical models. For this process, the underlying data is first prepared and adapted and then searched for new relationships in the central data-mining step.
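As a hedged sketch of such a central data-mining step, the code below clusters invented customer records with a plain k-means implementation; the accelerator-side algorithms and interfaces described in the thesis are not reproduced here.

```python
import random

# Invented customer records: (annual spend, transactions per month)
customers = [(120, 2), (150, 3), (900, 14), (880, 16), (400, 7),
             (430, 6), (980, 15), (130, 2), (460, 8), (910, 13)]

def kmeans(points, k=3, iters=20):
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: (p[0] - centroids[i][0]) ** 2
                                    + (p[1] - centroids[i][1]) ** 2)
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters

centroids, clusters = kmeans(customers)
for c, cl in zip(centroids, clusters):
    print(f"centroid ({c[0]:.0f}, {c[1]:.1f}): {len(cl)} customers")
```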
169 |
Parameterized Partition Valuation for Parallel Logic Simulation. Hering, Klaus; Haupt, Reiner; Petri, Udo, 01 February 2019
Parallelization of logic simulation at register-transfer and gate level is a promising way to accelerate the extremely time-consuming system simulation processes that accompany the design of whole processor structures. The background of this paper is the functional simulator parallelTEXSIM, which realizes simulation based on the clock-cycle algorithm on loosely coupled parallel processor systems. In preparation for parallel cycle simulation, hardware models must be partitioned, and this partitioning essentially determines the efficiency of the subsequent simulation. We introduce a new method of parameterized partition valuation for use within model partitioning algorithms. It is based on a formal definition of parallel cycle simulation involving a model of parallel computation called Communicating Processors. Parameters within the valuation function permit consideration of specific properties of both the simulation target architecture and the hardware design to be simulated. Our partition valuation method allows performance estimation for the corresponding parallel simulation, as confirmed by tests on several models of real processors, for instance the PowerPC 604, with parallel simulation running on an IBM SP2.
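The following sketch conveys the general idea of a parameterized partition valuation, balancing each processor's computation load against signals cut by partition boundaries, with tunable weights; the concrete function and its parameters are illustrative assumptions, not the paper's actual definition.

```python
# Hypothetical hardware model: gate -> evaluation cost, and signal edges between gates
gate_cost = {"g1": 4, "g2": 2, "g3": 6, "g4": 3, "g5": 5}
edges = [("g1", "g3"), ("g2", "g3"), ("g3", "g4"), ("g4", "g5"), ("g1", "g5")]

def valuate(partition, alpha=1.0, beta=0.5):
    """Score a partition: estimated cycle time is the slowest processor's
    compute load plus a weighted count of signals crossing partition
    boundaries. alpha and beta stand in for machine- and model-specific
    parameters of the kind the paper's valuation function exposes."""
    assign = {g: i for i, block in enumerate(partition) for g in block}
    max_load = max(sum(gate_cost[g] for g in block) for block in partition)
    cut = sum(1 for a, b in edges if assign[a] != assign[b])
    return alpha * max_load + beta * cut

balanced = [{"g1", "g2"}, {"g3"}, {"g4", "g5"}]
skewed = [{"g1", "g2", "g3", "g4"}, {"g5"}]
print(valuate(balanced), valuate(skewed))  # lower score = better partition
```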
170 |
Resource-Constrained Project Scheduling with Autonomous Learning Effects. Ticktin, Jordan M, 01 December 2019
It's commonly assumed that experience leads to efficiency, yet this is largely unaccounted for in resource-constrained project scheduling. This thesis considers the idea that learning effects could allow selected activities to be completed in reduced time if they're scheduled after activities in which workers learn relevant skills. It computationally explores the effect of this autonomous, intra-project learning on optimal makespan and problem difficulty. A learning extension to the standard RCPSP is proposed, and multiple parameters are considered, including project size, learning frequency, and learning intensity. A test instance generator is developed to adapt the popular PSPLIB library of scheduling problems to this model. Four Constraint Programming model formulations are developed to solve the model efficiently. Bounding techniques are proposed for tightening optimality gaps, including four lower-bounding model relaxations, an upper-bounding model relaxation, and a destructive lower bounding method. Hundreds of thousands of scenarios are tested to empirically determine the most efficient solution approaches and the impact of learning on project schedules. Potential makespan reductions as high as 50% are discovered, with the learning effects resembling a learning curve with a point of diminishing returns. A combination of bounding techniques is shown to produce significantly tighter optimality gaps.
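As a minimal sketch of the learning extension's core idea, the code below runs a serial schedule-generation pass in which an activity's duration shrinks if it starts after a designated skill-teaching activity finishes; the precedences, durations, and 25% reduction are invented, and resource constraints are omitted for brevity.

```python
# Activities with invented durations and precedence relations
durations = {"A": 4, "B": 6, "C": 5, "D": 3}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
# Hypothetical learning pair: D takes 75% as long if it starts after B ends
learning = {("B", "D"): 0.75}

def serial_schedule(order):
    """Schedule activities in the given order; apply learning reductions."""
    finish = {}
    for act in order:
        start = max((finish[p] for p in predecessors[act]), default=0)
        dur = durations[act]
        for (teacher, learner), factor in learning.items():
            if learner == act and teacher in finish and finish[teacher] <= start:
                dur *= factor  # learning effect: reduced duration
        finish[act] = start + dur
    return max(finish.values())

print(serial_schedule(["A", "B", "C", "D"]))  # makespan with learning applied
```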