91

Software test case generation from system models and specification : use of the UML diagrams and high level Petri nets models for developing software test cases

Alhroob, Aysh Menoer January 2010 (has links)
The main part of software testing is the generation of test cases suitable for software system testing. The quality of the test cases plays a major role in reducing the time of software system testing and consequently reduces the cost. In the model design stages, test cases are used to detect faults before implementation. This early detection offers more flexibility to correct faults in early stages rather than later ones. Devising tests that cover both the static and the dynamic specifications of a software system model is one of the challenges in software testing. The static and dynamic specifications can be represented efficiently by the Unified Modelling Language (UML) class diagram and sequence diagram. The work in this thesis shows that High Level Petri Nets (HLPN) can represent both of them in one model. Using a proper model to represent the software specifications is essential for generating proper test cases. The research presented in this thesis introduces novel and automated test case generation techniques that can be used within software system design testing. Furthermore, this research introduces an efficient automated technique to generate a formal software system model (HLPN) from semi-formal models (UML diagrams). The work in this thesis consists of four stages: (1) generating test cases from the class diagram and Object Constraint Language (OCL) that can be used for testing the static specifications (the structure) of the software system; (2) combining the class diagram, sequence diagram and OCL to generate test cases able to cover both static and dynamic specifications; (3) generating HLPN automatically from single or multiple sequence diagrams; (4) generating test cases from HLPN. The test cases generated in this work cover both the structure and the behaviour of the software system model. In the first two stages, the class diagram and sequence diagram are decomposed into nodes (edges), which are linked by a Classes Hierarchy Table (CHu) and an Edges Relationships Table (ERT). The linking process is based on the relationships between classes and edges. The relationships between the software system components are controlled by a consistency checking technique, and the detection of these relationships has been automated. The test cases were generated based on these interrelationships. The test cases have been reduced to a minimum number, and the best test case has been selected at every stage. The degree of similarity between test cases is used to discard similar test cases and so avoid redundancy. The transformation from UML sequence diagram(s) to HLPN simplifies the software system model and yields a formal model rather than a semi-formal one. After decomposing the sequence diagram into Combined Fragments, the proposed technique converts each Combined Fragment into the corresponding block in HLPN. These blocks are connected together in a Combined Fragments Net (CFN) to construct the HLPN model. Experiments with the proposed techniques show their effectiveness in covering most of the software system specifications.
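The similarity-based reduction mentioned above lends itself to a brief illustration. The Python sketch below is hypothetical: representing a test case as a set of covered model edges, using the Jaccard metric, and the 0.7 threshold are all illustrative assumptions, not the thesis's actual definitions.

```python
# Hypothetical sketch of similarity-based test-case reduction. The
# representation (a test case as a set of covered model edges) and the
# Jaccard threshold are illustrative assumptions only.

def jaccard(a: set, b: set) -> float:
    """Similarity of two coverage sets: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def reduce_test_cases(cases: dict[str, set], threshold: float = 0.7) -> list[str]:
    """Greedily keep test cases, largest coverage first, discarding any
    whose coverage is too similar to an already-kept case."""
    kept: list[str] = []
    for name, coverage in sorted(cases.items(), key=lambda kv: -len(kv[1])):
        if all(jaccard(coverage, cases[k]) < threshold for k in kept):
            kept.append(name)
    return kept

# Example: edges covered by each generated test case (hypothetical).
cases = {
    "tc1": {"e1", "e2", "e3"},
    "tc2": {"e1", "e2", "e3", "e4"},   # near-duplicate of tc1
    "tc3": {"e5", "e6"},
}
print(reduce_test_cases(cases))  # ['tc2', 'tc3'] -- tc1 dropped as redundant
```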
92

中國大陸外商直接投資之決定性因素實證 / An Empirical Study of the Determinants of Foreign Direct Investment in Mainland China

林俊儒 Unknown Date (has links)
This study adopts a fixed-effect panel data model to analyse the determinants of foreign direct investment (FDI) in mainland China. The empirical work proceeds along three dimensions: the first divides FDI by investment region, estimating the eastern coastal region and the central-western region separately; the second divides FDI by industry, analysing the three major industries; the third groups FDI by source, examining the determinants for eight main source regions. For the regional estimates, obtained by generalized least squares (GLS), the results show that the eastern region attracts FDI through market-oriented location factors, such as market opportunities and a higher degree of openness, whereas the central-western region attracts FDI through production-oriented location factors, such as low labour costs and local infrastructure; in other words, the factors driving FDI differ completely between the two regions. Moreover, the estimated fixed effects of the provinces and municipalities in the eastern region are clearly higher than those in the central-western region, confirming that the eastern region offers better conditions for attracting FDI. In the estimates by industry, the results show that for the primary industry (agriculture, forestry, fishery and animal husbandry) the degree of openness has a negative, crowding-out effect on FDI; for the secondary industry (manufacturing, construction, etc.) the determinants are market size, the wage rate, the supply of high-quality labour and the degree of openness, all with positive effects except the wage rate, which is negative; for the tertiary industry (services), because mainland China had not yet relaxed entry restrictions on foreign capital during the sample period, only infrastructure and the supply of high-quality labour are significant determinants of FDI. Regarding FDI from different source regions, the empirical results show that mainland China's rapid economic growth attracts investors from the United States, Singapore and Taiwan, while low wage levels attract the United States, South Korea, Singapore and Taiwan. Strengthened R&D capability in mainland China encourages additional direct investment from the United States, Japan, South Korea, Singapore and Taiwan, but the highly competitive environment caused by excessive concentration of firms crowds out FDI from the United Kingdom and Hong Kong. This study also considers the supply of high-quality labour, which matters most to firms from Hong Kong and Taiwan. The degree of openness has a positive effect on firms from Japan, Singapore, Hong Kong and Taiwan, but a negative effect on German firms. With mainland China's accession to the World Trade Organization (WTO), the Chinese government's policy toward foreign capital is bound to become more open; given the attraction of mainland China's vast market and cheap factors of production, more foreign capital is expected to flow in, further consolidating China's position as the world's factory. For Taiwan, facing the global economic integration and industrial division of labour driven by China's economic rise, a constructive attitude toward mainland China is required: making good use of Taiwan's abundant capital and R&D capability, as well as the advantage of a shared language and culture, to enter the mainland market actively, exploiting its cheap labour on the one hand and capturing market share on the other, treating mainland China as an extension of Taiwan's economic strength. Only then can Taiwanese firms secure a survival niche amid international competition and move from the Chinese market into the global market; Taiwan's future economic prospects would then be very promising.
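The abstract does not state the estimating equation. A generic fixed-effect panel specification consistent with the factors it discusses (market size, wages, openness, infrastructure) would be the following; the regressor names are illustrative, not the thesis's own notation:

```latex
% Illustrative fixed-effects panel specification (regressors assumed).
\mathrm{FDI}_{it} = \alpha_i
  + \beta_1\,\mathrm{MarketSize}_{it}
  + \beta_2\,\mathrm{Wage}_{it}
  + \beta_3\,\mathrm{Openness}_{it}
  + \beta_4\,\mathrm{Infrastructure}_{it}
  + \varepsilon_{it}
```

Here \(\alpha_i\) is the province-specific fixed effect whose estimated magnitude the study compares across eastern and central-western provinces, and GLS allows for non-spherical errors across provinces.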
93

Růst úvěru ve střední a východní Evropě / Credit Growth in Central and Eastern Europe

Němcová, Helena January 2012 (has links)
This thesis focuses on the development of credit to the private sector in the Central and Eastern European (CEE) countries. Although the speed of credit growth in these countries has recently slowed down as a consequence of the global financial crisis, the overall increase in credit to the private sector over the past decades has been immense. The thesis therefore examines whether this substantial increase in credit is linked to the convergence of the CEE countries towards equilibrium, or whether it represents excessive credit growth that could threaten macroeconomic and financial stability in these countries. We estimate the equilibrium credit levels for 11 transition countries by applying a dynamic panel data model. Since an in-sample approach may bias the estimation results, we perform the estimation out-of-sample, using a panel of selected developed EU countries as a benchmark. The difference between the actual and estimated credit-to-GDP ratios serves as a measure of private credit excessiveness. The results indicate slightly excessive, or close to equilibrium, credit-to-GDP ratios in Bulgaria, Estonia, and Latvia prior to the financial crisis. Given the significant decline in GDP during the crisis, this measure of credit excessiveness in these countries has further increased.
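The excessiveness measure described above is simple to state concretely. The following Python sketch illustrates it under invented coefficients and data; in the thesis the equilibrium equation is estimated from the benchmark panel, not assumed.

```python
# Hypothetical sketch: credit gap as the difference between the actual
# credit-to-GDP ratio and an equilibrium level fitted out-of-sample on
# benchmark countries. The linear form, coefficients and data are invented.

def equilibrium_credit_ratio(gdp_per_capita: float, interest_rate: float,
                             inflation: float) -> float:
    """Equilibrium credit/GDP predicted from benchmark-country estimates."""
    return 0.15 + 0.9e-5 * gdp_per_capita - 1.2 * interest_rate - 0.8 * inflation

def credit_gap(actual_ratio: float, **fundamentals) -> float:
    """Positive gap = credit above its estimated equilibrium (excessive)."""
    return actual_ratio - equilibrium_credit_ratio(**fundamentals)

gap = credit_gap(0.95, gdp_per_capita=14000.0, interest_rate=0.04, inflation=0.05)
print(f"credit gap: {gap:+.3f}")  # +0.762 -> well above estimated equilibrium
```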
94

[en] MODELING LEARNING OBJECTS COMPOSITION / [pt] MODELAGEM DE COMPOSIÇÃO DE OBJETOS DE APRENDIZAGEM

DIVA DE SOUZA E SILVA 12 July 2006 (has links)
[pt] O desenvolvimento de conteúdos instrucionais utilizando as novas tecnologias de informação é um processo caro, demorado e complexo, que aponta para o estabelecimento de novas metodologias. É neste contexto que surge o conceito de Objeto de Aprendizagem (LO), cujo enfoque está em promover a reutilização do conteúdo. Entretanto, ao considerar o reuso de conteúdo, também se observa uma necessidade de seqüenciá-lo para formar conteúdos mais elaborados ou mais complexos. Nesta tese adota-se uma estratégia de representar LOs cada vez menores, representando separadamente conteúdo e prática, aqui denominados Objetos Componentes (OCs). Para a estruturação do conteúdo, adaptou-se uma proposta já existente e definiu-se um esquema conceitual adequado à representação de atividades (ou práticas) de aprendizagem. Com vista à composição dos OCs, foi igualmente definido um esquema conceitual envolvendo conteúdos e práticas. Assim, com base em um algoritmo de seqüenciamento de OCs, um professor pode compreender melhor a forma de implementar um objeto complexo, como uma aula ou um curso, reduzindo erros e eventuais omissões na implementação da solução. Este seqüenciamento deve seguir uma metodologia e deve ser especificado de modo não ambíguo. É neste contexto que também é apresentada uma linguagem para especificação de seqüências de objetos de aprendizagem, com uma sintaxe adequada à descrição das possíveis formas de seqüenciamento de LOs. Finalmente, descreve-se um estudo de caso ilustrando a utilização dos esquemas conceituais desenvolvidos, do algoritmo proposto e da linguagem de especificação de seqüências de OCs. / [en] The development of instructional content using new Information Technologies is an expensive, time-consuming and complex process that calls for new methodologies. It was in this context that the concept of Learning Objects (LOs) was proposed, as an approach that promotes content reuse. However, if content is expressed as small LOs, it is also necessary to sequence them in order to build more elaborate and complex content. In this thesis we adopt a strategy of representing ever-smaller LOs that model content and practice separately, here called Component Objects (COs). In order to structure content we adapted an existing proposal and defined a conceptual schema for structuring learning practices (or activities). We also defined a conceptual schema for composing these COs. Based on these conceptual schemas it was then possible to propose an algorithm for sequencing COs, which helps a teacher/professor better control the implementation of a complex content unit such as a class or a course, thus reducing errors and possible omissions in its implementation. The sequencing process must follow a methodology and must be specified in a non-ambiguous way. It is in this context that we also present a specification language for sequences of LOs, with a syntax adequate for describing the possible ways of sequencing LOs. Finally, we describe a case study that illustrates the use of the proposed conceptual schemas, the sequencing algorithm and the specification language.
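The abstract does not detail the sequencing algorithm itself. A common way to order component objects under prerequisite constraints is a topological sort, sketched below in Python with invented CO names; whether the thesis uses exactly this scheme is not stated.

```python
# Hypothetical sketch of sequencing component objects (COs) into a larger
# learning object via prerequisite-driven topological ordering. CO names
# and prerequisite relations are invented for illustration.
from graphlib import TopologicalSorter

# Each CO maps to the set of COs that must come before it (assumed data).
prerequisites = {
    "intro_content":    set(),
    "concept_content":  {"intro_content"},
    "practice_quiz":    {"concept_content"},      # practice modeled separately
    "advanced_content": {"concept_content"},
    "final_practice":   {"practice_quiz", "advanced_content"},
}

sequence = list(TopologicalSorter(prerequisites).static_order())
print(sequence)
# e.g. ['intro_content', 'concept_content', 'practice_quiz',
#       'advanced_content', 'final_practice']
```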
95

Formale Semantik des Datentypmodells von SDL-2000 / Formal Semantics of the SDL-2000 Data Type Model

Löwis of Menar, Martin von 18 December 2003 (has links)
Mit der aktuellen Überarbeitung der Sprache SDL (Specification and Description Language) der ITU-T wurde die semantische Fundierung der formalen Definition dieser Sprache vollständig überarbeitet; die formale Definition basiert nun auf dem Kalkül der Abstract State Machines (ASMs). Ebenfalls neu definiert wurde das um objekt-orientierte Konzepte erweiterte Datentypsystem. Damit musste eine formale semantische Fundierung für diese neuen Konzepte gefunden werden. Der bisher verwendete Kalkül ACT.ONE sollte nicht mehr verwendet werden, da er schwer verwendbar, nicht implementierbar und nicht auf Objektsysteme erweiterbar ist. In der vorliegenden Arbeit werden die Prinzipien einer formalen Sprachdefinition dargelegt und die Umsetzung dieser Prinzipien für die Sprache SDL-2000 vorgestellt. Dabei wird erläutert, dass eine konsistente Sprachdefinition nur dadurch erreicht werden konnte, dass die Definition der formalen Semantik der Sprache parallel mit der Entwicklung der informalen Definition erfolgte. Dabei deckt die formale Sprachdefinition alle Aspekte der Sprache ab: Syntax, statische Semantik und dynamische Semantik. Am Beispiel der Datentypsemantik wird erläutert, wie jeder dieser Aspekte informal beschrieben und dann formalisiert wurde. Eine zentrale Rolle für die Anwendbarkeit der formalen Semantikdefinition in der Praxis spielt der Einsatz von Werkzeugen. Die Arbeit erläutert, wie aus der formalen Sprachdefinition vollautomatisch ein Werkzeug generiert wurde, das die Sprache SDL implementiert, und wie durch die Umsetzung der formalen Semantikdefinition in ein Werkzeug Fehler in dieser Definition aufgedeckt und behoben werden konnten. / With the latest revision of ITU-T SDL (Specification and Description Language), the semantic foundations of the formal language definition were completely revised; the formal definition is now based on the calculus of Abstract State Machines (ASMs). In addition, the data type system of SDL was revised, as object-oriented concepts were added. As a result, a new semantical foundation for these new concepts had to be defined. The ACT.ONE calculus that had been used so far was no longer suitable as a foundation, as it is hard to use, unimplementable and not extensible for object-oriented features. In this thesis, we elaborate the principles of a formal language definition and the realisation of these principles in SDL-2000. We explain that a consistent language definition can only be achieved by developing the formal semantics definition in parallel with the development of the informal definition. The formal language definition covers all aspects of the language: syntax, static semantics, and dynamic semantics. Using the data type semantics as an example, we show how each of these aspects is informally described and then formalized. For the applicability of the formal semantics definition by practitioners, the usage of tools plays a central role. We explain how we transform the formal language definition fully automatically into a tool that implements the language SDL. We also explain how creating the tool allowed us to uncover and correct errors in the informal definition.
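To give a flavour of the ASM calculus on which the formal semantics rests, the Python sketch below shows the core idea: a state as a mapping from locations to values, with all enabled rules fired simultaneously as one atomic update set. It is purely illustrative; the actual ASM rules of the SDL-2000 semantics are far more elaborate.

```python
# Minimal illustrative Abstract State Machine (ASM) interpreter. A state is
# a mapping from locations to values; one step collects the updates of all
# rules whose guards hold and applies them atomically (a consistent update
# set is assumed for simplicity).

State = dict  # location name -> value

def asm_step(state: State, rules) -> State:
    """Fire every enabled rule simultaneously and apply the update set."""
    updates = {}
    for guard, update in rules:
        if guard(state):
            updates.update(update(state))
    new_state = dict(state)
    new_state.update(updates)
    return new_state

# Toy machine: a counter that increments while running, then stops at 3.
rules = [
    (lambda s: s["mode"] == "run" and s["n"] < 3,
     lambda s: {"n": s["n"] + 1}),
    (lambda s: s["mode"] == "run" and s["n"] >= 3,
     lambda s: {"mode": "stop"}),
]

state = {"mode": "run", "n": 0}
while state["mode"] == "run":
    state = asm_step(state, rules)
print(state)  # {'mode': 'stop', 'n': 3}
```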
96

Analyse des Straßenverkehrs mit verteilten opto-elektronischen Sensoren / Analysis of Road Traffic Using Distributed Opto-electronic Sensors

Schischmanow, Adrian 14 November 2005 (has links)
Aufgrund der steigenden Verkehrsnachfrage und der begrenzten Ressourcen zum Ausbau der Straßenverkehrsnetze werden zukünftig größere Anforderungen an die Effektivität von Telematikanwendungen gestellt. Die Erhebung und Bereitstellung aktueller Verkehrsdaten durch geeignete Sensoren ist dazu eine entscheidende Voraussetzung. Gegenstand dieser Arbeit ist die großflächige Analyse des Straßenverkehrs auf der Basis bodengebundener und verteilter opto-elektronischer Sensoren. Es wird ein Konzept vorgestellt, das eine von der Bilddatenerhebung bis zur Bereitstellung der Daten für Verkehrsanwendungen durchgehende Verarbeitungskette enthält. Der interdisziplinäre Ansatz bildet die Basis zur Verknüpfung eines solchen Sensorsystems mit Verkehrstelematik. Die Abbildung des Verkehrsgeschehens erfolgt im Gegensatz zu herkömmlichen bodengebundenen Messsystemen innerhalb größerer zusammenhängender Ausschnitte des Verkehrsraums. Dadurch können streckenbezogene Verkehrskenngrößen direkt bestimmt werden. Die Georeferenzierung der Verkehrsobjekte ist die Grundlage für eine optimale Verkehrsanalyse und Verkehrssteuerung. Die generierten Daten sind Basis zur Findung und Verifizierung von Theorien und Modellen sowie zur Entwicklung verkehrsadaptiver Steuerungsverfahren auf mikroskopischer Ebene. Es wird gezeigt, wie aus der Fusion gleichzeitig erhaltener Daten mehrerer Sensoren, die im Bereich des Sichtbaren und im thermalen Infrarot sensitiv sind, ein zusammengesetztes Abbildungsmosaik eines vergrößerten Verkehrsraums erzeugt werden kann. In diesem Abbildungsmosaik werden Verkehrsdatenmodelle unterschiedlicher räumlicher Kategorien abgeleitet. Die Darstellung des Abbildungsmosaiks mit seinen Daten erfolgt auf unterschiedlichen Informationsebenen in geokodierten Karten. Die Bewertung mikroskopischer Verkehrsprozesse wird durch die besondere Berücksichtigung der Zeitkomponente bei der Visualisierung möglich. Die vorgestellte Verarbeitungskette beinhaltet neue Anwendungsbereiche für geografische Informationssysteme (GIS). Der beschriebene Ansatz wurde konzeptionell bearbeitet, in der Programmiersprache IDL realisiert und erfolgreich getestet. / The growing demand of urban and interregional road traffic requires an improvement regarding the effectiveness of telematics systems. The use of appropriate sensor systems for traffic data acquisition is a decisive prerequisite for the efficiency of traffic control. This thesis focuses on analyzing road traffic based on stationary and distributed ground opto-electronic matrix sensors. A concept is presented that covers the entire processing chain, from image data acquisition up to traffic data provision. This interdisciplinary approach establishes a basis for the integration of such a sensor system into telematics systems. Unlike conventional ground stationary sensors, the acquisition of traffic data is spread over larger contiguous areas. As a result, road-specific traffic data can be measured directly. Georeferencing of traffic objects is the basis for optimal road traffic analysis and road traffic control. The thesis demonstrates how to generate a spatial mosaic from the fused data of several sensors with different spectral sensitivity, covering the visible and thermal infrared ranges. For traffic flow analysis, special 4D data visualisation methods on different information levels were essential. The data processing chain introduces new areas of application for geographical information systems (GIS). The approach was worked out conceptually, implemented in the programming language IDL, and successfully tested and applied.
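Georeferencing of detected traffic objects, mentioned above, can be illustrated with a planar homography from image to map coordinates — one standard technique for ground-based cameras. Whether the thesis uses exactly this projection is not stated in the abstract, and the matrix values below are invented.

```python
# Illustrative sketch: georeferencing a detected vehicle by mapping its
# image coordinates to planar map coordinates with a homography, e.g.
# estimated from surveyed ground control points. Values are invented.
import numpy as np

H = np.array([[0.05,  0.001,  450200.0],
              [0.002, -0.06, 5410350.0],
              [0.0,    0.0,        1.0]])

def georeference(u: float, v: float) -> tuple[float, float]:
    """Project pixel (u, v) to planar map coordinates (easting, northing)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

easting, northing = georeference(640.0, 360.0)
print(f"vehicle at E={easting:.1f} m, N={northing:.1f} m")
```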
97

Filhote - ferramenta de suporte à análise e interpretação de dados biológicos / Filhote: a support tool for the analysis and interpretation of biological data

Trevisan, Daniela Mascarenhas de Queiroz 01 December 2015 (has links)
Este trabalho apresenta uma proposta para a estruturação dos dados de peixes coletados na região da Usina Hidrelétrica Luís Eduardo Magalhães (ou Usina de Lajeado), no período de 1999 a 2012, e o desenvolvimento de uma ferramenta, chamada Filhote, para a administração destes dados. O principal objetivo é oferecer um meio de manipulação e armazenamento eficiente aos dados obtidos possibilitando a construção de séries históricas com a agregação de resultados de futuras coletas. Para isto, foi desenvolvido um modelo de dados para o armazenamento estruturado desta série, visando servir de alicerce aos estudos de monitoramento da fauna de peixes em ambientes com e sem reservatório. Tomando este modelo como base, a ferramenta Filhote foi integrada à aplicação de Mineração de Dados WEKA com o intuito de prover ao pesquisador um meio de análise de dados através da geração de regras de associação. O modelo de dados e a ferramenta desenvolvida são viáveis para o tratamento dos dados existentes e se apresentam como uma boa alternativa para projetos que coletam dados neste mesmo sentido, possibilitando a expansão dos módulos de armazenamento, bem como com a inclusão de novos algoritmos de mineração de dados. / This work presents a proposal for structuring the data on fishes collected in the region of the Luís Eduardo Magalhães hydroelectric plant (also known as the Lajeado plant) during the 1999-2012 period, and the development of a tool, called Filhote, for managing these data. The main purpose is to provide an efficient way to manipulate and store the collected data, enabling the construction of time series by aggregating results from future collections. To this end, a data model was developed for the structured storage of this series, aiming to provide the basis for studies monitoring the fish fauna in environments with and without a reservoir. Taking this model as a basis, the Filhote tool was integrated with the WEKA data-mining application in order to provide researchers with a means of data analysis through the generation of association rules. The data model and the developed tool are suitable for handling the existing data and represent a good alternative for projects that collect similar data, allowing the expansion of the storage modules as well as the inclusion of new data-mining algorithms.
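As a rough illustration of the association-rule output such an integration produces, the self-contained Python sketch below derives simple one-antecedent rules with support and confidence. Attribute names and thresholds are invented, and the Filhote tool itself delegates this step to WEKA rather than implementing it.

```python
# Hypothetical sketch of association-rule generation over collection
# records: count item and pair frequencies, then emit rules that clear
# invented support/confidence thresholds.
from itertools import combinations
from collections import Counter

records = [  # each record: set of discretized attributes of one capture
    {"species=A", "site=reservoir", "season=dry"},
    {"species=A", "site=reservoir", "season=wet"},
    {"species=B", "site=river", "season=dry"},
    {"species=A", "site=reservoir", "season=dry"},
]

n = len(records)
pair_counts = Counter()
item_counts = Counter()
for r in records:
    item_counts.update(r)
    pair_counts.update(combinations(sorted(r), 2))

for (a, b), count in pair_counts.items():
    support = count / n
    for antecedent, consequent in ((a, b), (b, a)):
        confidence = count / item_counts[antecedent]
        if support >= 0.5 and confidence >= 0.9:
            print(f"{antecedent} => {consequent} "
                  f"(support {support:.2f}, confidence {confidence:.2f})")
# Prints, e.g.: site=reservoir => species=A (support 0.75, confidence 1.00)
```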
98

An Open Data Model for Emulation Models of Industrial Components

Birtic, Martin January 2018 (has links)
Emulation is a technology historically used mostly for virtual commissioning of automated industrial systems and for operator training. Trends show that new areas for deployment are being investigated. One way to broaden the scope of emulation technology is to increase the emulation detail level. The University of Skövde conducts research within emulation technology and is developing a higher-detail-level emulation platform operating at component level. For transparent and systematic development of component models on this level, an open, extensible, and flexible data model for emulation models of industrial components is desired. This thesis contributes to this endeavour by developing a first draft of such a data model. A demonstration is also conducted by implementing a few components in the emulation environment under development, using XML as the file format. An iterative "design and creation" methodology was used to develop and implement an object-oriented data model. A selected set of industrial components was used to develop and demonstrate the data model, and the final result is visually represented as a class diagram together with explanatory documentation. Using the methodology and data modelling strategy of this thesis, systematic and transparent development of emulation models on component level is possible in an extensible and flexible manner. / Emulering är en teknologi som historiskt mestadels använts vid virtuell idrifttagning av industriella automatiserade system samt vid operatörsträning. Trender visar att nya användningsområden utforskas. Ett sätt att vidga användningsområdet för emulering är att öka dess detaljnivå. Högskolan i Skövde utför forskning inom emulering och utvecklar en emuleringsplattform med utökad detaljnivå, även kallad komponentnivån. För att kunna arbeta systematiskt med utvecklandet av emuleringsmodeller för denna nivå önskas en öppen, skalbar och flexibel datamodell för emuleringsmodeller. Detta examensarbete bidrar till detta genom att utveckla ett första utkast av en sådan datamodell. Datamodellen demonstreras genom implementation i emuleringsmiljön under utveckling, med hjälp av filformatet XML. En iterativ "design and creation"-metodologi användes för att utveckla och implementera datamodellen. En uppsättning industriella komponenter användes i utvecklingen och implementationen av datamodellen. Projektets resultat presenteras som ett klassdiagram tillsammans med förklarande dokumentation. Används projektets metodologi och datamodelleringsstrategi kan man med fördel arbeta transparent och systematiskt med utveckling av emuleringsmodeller för angiven nivå. / TWIN
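To illustrate what an XML serialization of a component emulation model might look like, here is a small Python sketch using the standard library. The element and attribute names are invented, since the thesis documents its schema as a class diagram rather than a normative XML format.

```python
# Illustrative sketch: serializing a hypothetical component emulation model
# to XML with the standard library. Element and attribute names are assumed.
import xml.etree.ElementTree as ET

component = ET.Element("Component", name="Cylinder01", type="PneumaticCylinder")
signals = ET.SubElement(component, "Signals")
ET.SubElement(signals, "Input", name="extend", datatype="bool")
ET.SubElement(signals, "Output", name="extended", datatype="bool")
behaviour = ET.SubElement(component, "Behaviour")
ET.SubElement(behaviour, "Parameter", name="stroke_time_s", value="0.8")

ET.indent(component)  # pretty-print (Python 3.9+)
print(ET.tostring(component, encoding="unicode"))
```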
99

General Insurance Reserve Risk Modeling Based on Unaggregated Data / Modelování rizika rezerv v neživotním pojištění založené na neagregovaných datech

Zimmermann, Pavel January 2004 (has links)
Recently the field of actuarial mathematics has experienced large development due to a significant increase in demand for insurance and financial risk quantification, driven by the start of implementing the complex of rules of the international reporting standards (IFRS) and solvency reporting (Solvency II). It appears that the key question for solvency measurement is the determination of the probability distribution of the future cash flows of an insurance company. Solvency is then reported through an appropriate risk measure based, e.g., on a percentile of this distribution. While currently popular models are based solely on aggregated data (such as total loss development over a certain time period), the main objective of this work is to scrutinize the possibilities of modelling the reserve risk (i.e., roughly speaking, the distribution of the ultimate incurred value of claims that have already happened in the past) based directly on individual claims. These models have not yet become popular, and to the author's knowledge an overview of such models has not been published previously. The assumptions and specification of the already published models were compared to practical experience and some inadequacies were pointed out. Furthermore, a new reserve risk model was constructed, which is believed to have practically more suitable assumptions and properties than the existing models. Theoretical aspects of the new model were studied and the distribution of the ultimate incurred value (the modelled variable) was derived. Emphasis was also put on practical aspects of the developed model and its applicability in industrial use. Therefore, some restrictive assumptions, which might be considered realistic in a variety of practical cases and which lead to a significant simplification of the model, were identified throughout the work. Furthermore, algorithms to reduce the number of necessary calculations were developed. In the last chapters of the work, effort was devoted to methods of estimating the considered parameters while respecting practical limitations (such as missing observations at the time of modelling). For this purpose, survival analysis was (amongst other methods) applied.
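A minimal Monte Carlo sketch conveys the idea of deriving a reserve-risk distribution directly from individual claims. The development-factor distribution, parameters and data below are invented for illustration and do not reproduce the thesis's model.

```python
# Hypothetical sketch of an individual-claims reserve simulation: for each
# open claim, draw a stochastic development factor to obtain the ultimate
# incurred value, then read a high percentile of the simulated reserve
# distribution as a risk measure. All distributions and data are invented.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 10_000

# Open claims: (paid to date, expected remaining development share).
open_claims = [(12_000.0, 0.40), (5_500.0, 0.65), (30_000.0, 0.20)]

ultimates = np.zeros(n_sims)
for paid, remaining in open_claims:
    # Lognormal noise around the expected remaining development (assumed).
    factor = rng.lognormal(mean=np.log(1.0 + remaining), sigma=0.25, size=n_sims)
    ultimates += paid * factor

reserve = ultimates - sum(paid for paid, _ in open_claims)
print(f"best estimate reserve: {reserve.mean():,.0f}")
print(f"99.5% reserve risk:    {np.quantile(reserve, 0.995):,.0f}")
```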
100

國際物流業導入跨組織作業基礎成本制之個案研究-以某國際物流公司為例 / A Case Study on Introducing Inter-organizational Activity-Based Costing in the International Logistics Industry: The Example of an International Logistics Company

林輝倫 Unknown Date (has links)
In the past, activity-based costing (ABC) focused on the allocation of resources within an individual enterprise. As supply chains and inter-organizational information flows have gained attention, the ABC concept has been extended to inter-organizational operations, such as costing the entire logistics process and supporting pricing and product-mix decisions, and it can also help coordinate the participating logistics companies. Drawing on information-management concepts, this study examines the planning and design of an inter-organizational ABC management system for international logistics, illustrated with an international logistics case. Interviews were conducted with the case company and its operating processes were analysed; following systems-analysis methods and steps, an enterprise model was built, analysed and designed, establishing an activity-based costing data-model architecture for international logistics. Through the case study, this research explores how activity-based costing can gain advantage from the support of information systems. Fourth-party logistics is currently at an early stage of development and most operators are still not very clear about it, but as logistics services require deeper and more comprehensive improvements in service quality, fourth-party logistics will be the trend of future development, and continuing to operate in isolation, company by company, will not fit that trend. An inter-organizational activity-based costing system will provide inter-organizational cost and activity information, so that firms can master inter-organizational costs and operations and gain an inter-organizational competitive advantage.
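The mechanics of activity-based costing across organizations can be sketched briefly: pool costs by activity, compute a rate per unit of cost driver, and charge each cost object by its driver consumption. The Python sketch below uses invented activities, costs and driver volumes; it illustrates the general ABC calculation, not the case company's actual model.

```python
# Hypothetical sketch of inter-organizational ABC: activity cost pools
# contributed by partner organizations, activity rates per cost driver,
# and the cost charged to one shipment. All figures are invented.

activity_costs = {"customs_clearance": 48_000.0, "warehousing": 90_000.0,
                  "line_haul": 150_000.0}

# Total cost-driver volumes (e.g. declarations, pallet-days, vehicle-km).
driver_volumes = {"customs_clearance": 1_200, "warehousing": 30_000,
                  "line_haul": 500_000}

rates = {a: activity_costs[a] / driver_volumes[a] for a in activity_costs}

# Driver consumption of one shipment flowing through several organizations.
shipment_usage = {"customs_clearance": 2, "warehousing": 40, "line_haul": 800}

cost = sum(rates[a] * q for a, q in shipment_usage.items())
print("activity rates:", {a: round(r, 3) for a, r in rates.items()})
print(f"shipment cost: {cost:.2f}")  # 2*40.0 + 40*3.0 + 800*0.3 = 440.00
```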
