71

A semi-formal comparison between the Common Object Request Broker Architecture (CORBA) and the Distributed Component Object Model (DCOM)

Conradie, Pieter Wynand 06 1900 (has links)
The way in which application systems and software are built has changed dramatically over the past few years. This is mainly due to advances in hardware technology, programming languages, as well as the requirement to build better software application systems in less time. The importance of mondial (worldwide) communication between systems is also growing exponentially. People are using network-based applications daily, communicating not only locally, but also globally. The Internet, the global network, therefore plays a significant role in the development of new software. Distributed object computing is one of the computing paradigms that promise to meet the need to develop client/server application systems, communicating over heterogeneous environments. This study, of limited scope, concentrates on one crucial element without which distributed object computing cannot be implemented. This element is the communication software, also called middleware, which allows objects situated on different hardware platforms to communicate over a network. Two of the most important middleware standards for distributed object computing today are the Common Object Request Broker Architecture (CORBA) from the Object Management Group, and the Distributed Component Object Model (DCOM) from Microsoft Corporation. Each of these standards is implemented in commercially available products, allowing distributed objects to communicate over heterogeneous networks. In studying each of the middleware standards, a formal way of comparing CORBA and DCOM is presented, namely meta-modelling. For each of these two distributed object infrastructures (middleware), meta-models are constructed. Based on this uniform and unbiased approach, a comparison of the two distributed object infrastructures is then performed. The results are given as a set of tables in which the differences and similarities of each distributed object infrastructure are exhibited. By adopting this approach, errors caused by misunderstanding or misinterpretation are minimised. Consequently, an accurate and unbiased comparison between CORBA and DCOM is made possible, which constitutes the main aim of this dissertation. / Computing / M. Sc. (Computer Science)
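The concept-by-concept comparison described in this abstract can be pictured as a mapping table between the two middleware models. The sketch below is a minimal, illustrative Python rendering of such a table; the listed correspondences are widely documented CORBA/DCOM counterparts, but the table layout is an assumption for illustration, not the dissertation's actual meta-models or results.

```python
# Illustrative sketch only: a concept-by-concept CORBA/DCOM comparison table,
# in the spirit of the comparison tables the dissertation produces.  The
# concept pairs are commonly cited counterparts; the structure is an assumption.

CONCEPT_MAP = [
    # (shared concept,        CORBA term,                          DCOM term)
    ("Interface definition",  "OMG IDL",                           "Microsoft IDL (MIDL)"),
    ("Root interface",        "CORBA::Object",                     "IUnknown"),
    ("Object reference",      "Interoperable Object Reference",    "Interface pointer (OBJREF)"),
    ("Wire protocol",         "GIOP/IIOP",                         "ORPC (based on DCE RPC)"),
    ("Type metadata",         "Interface Repository",              "Type Library"),
    ("Activation/location",   "ORB + Implementation Repository",   "Service Control Manager (SCM)"),
]

def print_comparison(rows):
    """Print an aligned three-column comparison table."""
    header = ("Concept", "CORBA", "DCOM")
    all_rows = [header] + list(rows)
    widths = [max(len(r[i]) for r in all_rows) for i in range(3)]
    for row in all_rows:
        print("  ".join(col.ljust(w) for col, w in zip(row, widths)))

if __name__ == "__main__":
    print_comparison(CONCEPT_MAP)
```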
72

The Role of Workstation-Based Client/Server Systems in Changing Business Processes: a Multiple Case Study

Nik Hassan, Nik Rushdi 12 1900 (has links)
Although several studies question information technology's contribution to productivity, organizations continue to invest in client/server systems (CSSs), particularly as enablers of business process reengineering (BPR). These efforts may be wasted if they do not improve business processes. This study focused on business processes and investigated the role of workstation-based CSSs in changing business processes. A multiple case study of workstation-based CSS databases in three organizations was performed with the proposition that they moderate the relation between managerial action and changes within business processes. The research framework suggested that changes to business processes are achieved by reducing uncertainty. In order to measure change in business processes, this study categorized business process change into: (1) compressing sequential tasks across functions, (2) compressing tasks vertically within the managerial hierarchy, (3) eliminating slack resources, (4) reducing the distance between the point of decision and the point of information, or eliminating intermediaries, (5) reconfiguring sequential processes to operate in parallel, and (6) linking parallel activities during the process. Data collected from questionnaires, interviews, and observations from three case studies were used to construct network diagrams, relationship matrices, reachability matrices, and task tables of business processes. The results of this research partially support the proposition that managerial action affects business process change by reducing uncertainty. This research suggests that changes in the use of workstation-based CSSs are related to changes in business processes. However, because of the small sample size, no finding was made regarding changes in the strength of that relationship. Therefore, within its limitations, this research (1) partially supports the proposition that CSSs moderate changes in business processes, (2) found that both favorable and unfavorable changes may result from using CSSs, (3) explains how business process change occurs, and (4) suggests new variables for measuring successful BPR.
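The reachability matrices mentioned in this abstract can be derived mechanically from a task-dependency (adjacency) matrix. A minimal sketch follows, using Warshall's transitive-closure algorithm; the task names and dependencies are hypothetical and are not drawn from the case studies.

```python
# Illustrative sketch: computing a reachability matrix from a task adjacency
# matrix via Warshall's transitive-closure algorithm.  Task names and the
# dependency structure are hypothetical examples.

def reachability(adjacency):
    """Return the transitive closure (reachability matrix) of a 0/1 adjacency matrix."""
    n = len(adjacency)
    reach = [row[:] for row in adjacency]      # copy so the input stays untouched
    for k in range(n):                         # allow task k as an intermediate step
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return [[int(v) for v in row] for row in reach]

if __name__ == "__main__":
    tasks = ["order entry", "credit check", "picking", "shipping"]   # hypothetical
    adj = [                  # adj[i][j] == 1 means task i directly precedes task j
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 0],
    ]
    for name, row in zip(tasks, reachability(adj)):
        print(f"{name:12s} {row}")
```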
73

Compliance Issues In Cloud Computing Systems

Unknown Date (has links)
Appealing features of cloud services such as elasticity, scalability, universal access, low entry cost, and flexible billing motivate consumers to migrate their core businesses into the cloud. However, there are challenges regarding security, privacy, and compliance. Building compliant systems is difficult because of the complex nature of regulations and cloud systems. In addition, the lack of complete, precise, vendor-neutral, and platform-independent software architectures makes compliance even harder. We have attempted to make regulations clearer and more precise with patterns and reference architectures (RAs). We have analyzed regulation policies, identified overlaps, and abstracted them as patterns to build compliant RAs. RAs should be complete, precise, abstract, vendor neutral, platform independent, and free of implementation details; however, their levels of detail and abstraction are still debatable and there is no commonly accepted definition of what an RA should contain. Existing approaches to building RAs lack structured templates and systematic procedures. In addition, most approaches do not take full advantage of patterns and best practices that promote architectural quality. We have developed a five-step approach by analyzing features from available approaches but refining and combining them in a new way. We consider an RA a large compound pattern that can improve the quality of the concrete architectures derived from it and from which we can derive more specialized RAs for cloud systems. We have built an RA for HIPAA, a compliance RA (CRA), and a specialized compliance and security RA (CSRA) for cloud systems. These RAs take advantage of patterns and best practices that promote software quality. We evaluated the architecture by creating profiles. The proposed approach can be used to build RAs from scratch or to build new RAs by abstracting real RAs for a given context. We have also described an RA itself as a compound pattern by using a modified POSA template. Finally, we have built a concrete deployment and availability architecture derived from the CSRA that can be used as a foundation to build compliant systems in the cloud. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2015. / FAU Electronic Theses and Dissertations Collection
74

Das MTA-Protokoll: Ein transaktionsorientiertes Managementprotokoll auf Basis von SNMP

Mandl, Peter 30 January 2013 (has links) (PDF)
The distribution of application systems across many networked computers that has emerged in recent years, established in practice under the term client/server computing, inevitably brought with it a gap in the management of these systems. It quickly became clear that distributed systems require far more sophisticated administration techniques than were known from centralized systems. Efforts by standardization bodies and vendor consortia have led to a degree of manageability, but it is currently still largely limited to the participating nodes and the network components. Applications, for which the computers are ultimately deployed, have so far been integrated only rudimentarily into the management standards available and used in practice today. Yet the number of objects to be managed within applications keeps growing, and the complexity of the relationships among those objects keeps increasing. This complexity demands fault-tolerant mechanisms in the management systems through which applications are administered. This paper deals with mechanisms for transaction-secured management, with application management in the foreground. Transaction concepts, developed primarily in the database field, are examined for their applicability to the management of distributed applications. A new protocol (Management Transaction Protocol, MTA protocol for short) is presented as an extension of SNMP; it enables distributed transactions on management objects.
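To make the idea of transaction-secured management concrete, the following is a minimal Python sketch of a two-phase-commit-style exchange between a manager and several object servers, in the spirit of the protocol described above. All class and method names are hypothetical assumptions; the actual MTA protocol is defined as an SNMP extension with its own PDUs, timers, and state machines.

```python
# Hedged sketch of a two-phase-commit-style management transaction.  Classes
# and method names are hypothetical; this is not the MTA protocol's real
# SNMP encoding or state machine.

class ObjectServer:
    """Holds management objects and votes on / applies transactional sets."""

    def __init__(self, name, objects):
        self.name = name
        self.objects = dict(objects)
        self._pending = None

    def prepare(self, updates):
        # Vote "yes" only if every referenced object exists (stand-in for real checks).
        if all(oid in self.objects for oid in updates):
            self._pending = dict(updates)
            return True
        return False

    def commit(self):
        self.objects.update(self._pending or {})
        self._pending = None

    def rollback(self):
        self._pending = None


def run_transaction(servers, updates_per_server):
    """Phase 1: collect votes; phase 2: commit only if every server voted yes."""
    voted_yes = []
    for server in servers:
        if server.prepare(updates_per_server.get(server.name, {})):
            voted_yes.append(server)
        else:                              # any "no" vote aborts the whole transaction
            for s in voted_yes:
                s.rollback()
            return False
    for server in voted_yes:
        server.commit()
    return True


if __name__ == "__main__":
    a = ObjectServer("nodeA", {"app.threads": 8})
    b = ObjectServer("nodeB", {"app.threads": 8})
    ok = run_transaction([a, b], {"nodeA": {"app.threads": 16},
                                  "nodeB": {"app.threads": 16}})
    print("committed" if ok else "rolled back", a.objects, b.objects)
```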
75

An agent-based peer-to-peer grid computing architecture

Tang, Jia. January 2005 (has links)
Thesis (Ph.D.)--University of Wollongong, 2005. / Typescript. Includes bibliographical references: leaves 88-95.
76

Disconnected operation in a distributed file system

Kistler, James Jay, January 1900 (has links)
Based on the author's thesis (Ph.D.), 1993. / Includes bibliographical references ([239]-244) and index. Also issued online.
77

A technology reference model for client/server software development

Nienaber, R. C. (Rita Charlotte) 06 1900 (has links)
In today's highly competitive global economy, information resources representing enterprise-wide information are essential to the survival of an organization. The development of and increase in the use of personal computers and data communication networks are supporting or, in many cases, replacing the traditional computer mainstay of corporations. The client/server model incorporates mainframe programming with desktop applications on personal computers. The aim of the research is to compile a technology model for the development of client/server software. A comprehensive overview of the individual components of the client/server system is given. The different methodologies, tools and techniques that can be used are reviewed, as well as client/server-specific design issues. The research is intended to create a road map in the form of a Technology Reference Model for Client/Server Software Development. / Computing / M. Sc. (Information Systems)
80

Implementation business-to-business electronic commerce website using active server pages

Teesri, Sumuscha 01 January 2000 (has links)
E-commerce is the current approach to conducting business of any type online; it uses the power of digital information to understand the requirements and preferences of each client and each partner, to tailor products and services to them, and then to deliver those products and services as swiftly as possible.
