
Synthesis of software architectures for systems-of-systems: an automated method by constraint solving / Síntese de arquiteturas de software para sistemas-de-sistemas: um método automatizado por resolução de restrições

Guessi Margarido, Milena 27 September 2017 (has links)
Systems-of-Systems (SoS) encompass diverse and independent systems that must cooperate with each other to perform a combined action that is greater than their individual capabilities. In parallel, architecture descriptions, which are the main artifact expressing software architectures, play an important role in fostering interoperability among constituents by facilitating the communication among stakeholders and supporting the inspection and analysis of the SoS from an early stage of its life cycle. The main problem addressed in this thesis is the lack of adequate architecture descriptions for SoS, which are often built without adequate care for their software architecture. Since constituent systems are, in general, not known at design-time due to the evolving nature of SoS, the architecture description must specify at design-time which coalitions among constituent systems are feasible at run-time. Moreover, as many SoS are being developed for safety-critical domains, additional measures must be put in place to ensure the correctness and completeness of architecture descriptions. To address this problem, this doctoral project employs SosADL, a formal language tailored for the description of SoS that enables one to express software architectures as dynamic associations between independent constituent systems whose interactions are mediated to accomplish a combined action. To synthesize concrete architectures that adhere to such a description, this thesis develops a formal method, named Ark, that systematizes the steps for producing these artifacts. The method creates an intermediate formal model, named TASoS, which expresses the SoS architecture in terms of a constraint satisfaction problem that can be automatically analyzed for an initial set of properties. The feedback obtained from this analysis can be used for subsequent refinements or revisions of the architecture description.
A software tool named SoSy was also developed to support the Ark method: it automates the generation of intermediate models and concrete architectures, thus concealing the use of constraint solvers during SoS design and development. The method and its accompanying tool were applied to model a SoS for urban river monitoring, in which the feasibility of candidate abstract architectures was investigated. By formalizing and automating the required steps for SoS architectural synthesis, Ark contributes to the adoption of formal methods in the design of SoS architectures, a necessary step toward higher reliability levels.
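The constraint-satisfaction idea can be illustrated with a toy sketch (all names and constraints below are invented for illustration; this is not the actual TASoS encoding, which is far richer): constituents advertise capabilities, the mission requires a capability set, and a coalition is feasible when it covers the required capabilities without violating pairwise constraints.

```python
from itertools import combinations

# Toy stand-in for a constraint model of feasible coalitions (names are
# illustrative, not the actual SosADL/TASoS encoding).
constituents = {"gauge1": {"level"}, "gauge2": {"level"},
                "gateway": {"relay"}, "siren": {"alarm"}}
required = {"level", "relay", "alarm"}            # capabilities the mission needs
incompatible = {frozenset({"gauge1", "gauge2"})}  # constraint: redundant sensors excluded

def feasible(coalition):
    caps = set().union(*(constituents[c] for c in coalition))
    ok_caps = required <= caps
    ok_pairs = all(frozenset(p) not in incompatible
                   for p in combinations(coalition, 2))
    return ok_caps and ok_pairs

solutions = [set(c) for r in range(1, len(constituents) + 1)
             for c in combinations(constituents, r) if feasible(c)]
print(solutions)
```

A real synthesis method would hand such constraints to a dedicated solver; exhaustive search is used here only to keep the sketch self-contained.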

Comments on the cybernetics of stability and regulation in social systems

Ben-Eli, M. U. January 1976 (has links)
The methods and principles of cybernetics are applied to a discussion of stability and regulation in social systems, taking a global viewpoint. The fundamental but still classical notion of stability as applied to homeostatic and ultrastable systems is discussed, with particular reference to a specific well-studied example of a closed social group (the Tsembaga studied by Roy Rappaport in New Guinea). The discussion extends to the problem of evolution in large systems, and the question of regulating evolution is addressed without special qualifications. A more comprehensive idea of stability is introduced as the argument turns to the problem of evolution for viability in general. Concepts pertaining to the problem of evolution are exemplified by a computer simulation model of an abstractly defined ecosystem in which various dynamic processes occur, allowing the study of adaptive and evolutionary behaviour. In particular, the role of coalition formation and cooperative behaviour is stressed as a key factor in the evolution of complexity. The model consists of a population of several species of dimensionless automata inhabiting a geometrically defined environment in which a commodity essential for metabolic requirements (food) appears. Automata can sense properties of their environment, move about it, compete for food, reproduce, or combine into coalitions, thus forming new and more complex species. Each species is associated with a specific genotype from which the species’ behavioural characteristics (its phenotype) are derived. Complexity and survival efficiency of species increase through coalition formation, an event which occurs when automata are faced with an “undecidable” situation that is resolvable only by forming a new and more complex organization.
Exogenous manipulation of the food distribution pattern and other critical factors produces different environmental conditions, resulting in different behaviour patterns of automata and in different evolutionary “pathways.” Eve-1, the computer program developed to implement this model, accepts a high-level command language which allows for the setting of parameters, definition of initial configurations, and control of output formats. Results of simulation are produced graphically and include various pertinent tables. The program was given a modular hierarchical structure which allows easy generation of new versions incorporating different sets of rules. The model strives to capture the essence of the evolution of complexity viewed as a general process rather than to describe the evolution of a particular “real” system. In this respect it is not context-specific, and the behaviours observable in different runs can receive various interpretations depending on specific identifications. Of these, biological, ecological, and sociological interpretations are the most obvious, and the latter, in particular, is stressed.
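A toy simulation can convey the coalition-formation mechanism described above (the rules, parameters, and data structures are invented for illustration and are far simpler than Eve-1): automata pay a metabolic cost each step, feed when they claim a food cell alone, and merge into a more complex species when several claim the same cell — the "undecidable" event resolved by forming a coalition.

```python
import random

# Minimal sketch of coalition formation among automata (illustrative rules,
# not Ben-Eli's actual model): a tie for the same food cell is the
# "undecidable" situation resolved by merging into a more complex species.
random.seed(1)

automata = [{"genes": {i}, "energy": 5} for i in range(6)]  # genotype = gene set

def step(automata, n_food=3):
    claims = {}
    for a in automata:
        a["energy"] -= 1                       # metabolic cost
        claims.setdefault(random.randrange(n_food), []).append(a)
    survivors = []
    for group in claims.values():
        if len(group) == 1:
            group[0]["energy"] += 3            # uncontested feeding
            survivors.append(group[0])
        else:                                  # undecidable: form a coalition
            merged = {"genes": set().union(*(g["genes"] for g in group)),
                      "energy": sum(g["energy"] for g in group)}
            survivors.append(merged)
    return [a for a in survivors if a["energy"] > 0]

for _ in range(5):
    automata = step(automata)
print([sorted(a["genes"]) for a in automata])
```

Across runs, the number of distinct individuals falls while genotype size grows, which is the qualitative pattern of increasing complexity through coalition formation.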

Synthèse d’architectures logicielles pour systèmes-de-systèmes : une méthode automatisée par résolution de contraintes / Synthesis of software architectures for systems-of-systems : an automated method by constraint solving

Margarido, Milena 27 September 2017 (has links)
Systems-of-Systems (SoS) interconnect several independent systems that work together to perform a joint action exceeding their individual capabilities. Architecture descriptions are artifacts that describe software architectures and, in the SoS context, play an important role in promoting the interaction of constituents while fostering communication among stakeholders and supporting inspection and analysis activities from the beginning of the life cycle. The main problem addressed in this thesis is the lack of adequate architecture descriptions for SoS, which are developed without the necessary attention to their software architecture. Since the constituent systems are not necessarily known at design-time, owing to the evolutionary development of SoS, the architecture description must define at design-time which coalitions among constituent systems will be possible at run-time. Moreover, as many of these systems are developed for the safety-critical domain, additional measures must be put in place to guarantee the correctness and completeness of the architecture description. To address this problem, we employ SosADL, a formal language created especially for the SoS domain, which makes it possible to describe software architectures as dynamic associations between independent systems whose interactions must be coordinated to achieve a combined action. In particular, a new formal method, named Ark, is proposed to systematize the steps required to synthesize concrete architectures that conform to such a description. To this end, the method creates an intermediate formal model, named TASoS, which describes the SoS architecture as a constraint satisfiability problem, thereby enabling the automatic verification of an initial set of properties. The result of this analysis can be used in subsequent refinements and revisions of the architecture description. A software tool named SoSy was also developed to automate the generation of intermediate models and concrete architectures, concealing the use of constraint solvers in SoS design and development. Notably, this tool is integrated into a larger, more complete development environment for SoS design. The method and its tool were applied to a model of an urban river monitoring SoS in which the feasibility of abstract architectures was studied. By formalizing and automating the steps required for SoS architectural synthesis, Ark contributes to the adoption of formal methods in the design of SoS architectures, which is necessary for reaching higher levels of reliability.

Principy maxima pro nelineární systémy eliptických parciálních diferenciálních rovnic / Maximum principles for elliptic systems of partial differential equations

Bílý, Michael January 2017 (has links)
We consider nonlinear elliptic Bellman systems which arise in the theory of stochastic differential games. The right-hand sides of the equations (which are called Hamiltonians) may have quadratic growth with respect to the gradient of the unknowns. Under certain assumptions on the Lagrangians (from which the Hamiltonians are derived), which are satisfied for many types of stochastic games, we establish the existence and uniqueness of a Nash point and develop structural conditions on the Hamiltonians. From these conditions we establish a certain version of the maximum and minimum principle. This result is then used to establish the existence of a bounded solution.
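In generic notation (an illustrative form, not the thesis's exact setting), a two-player system of this kind reads:

```latex
% Generic two-player elliptic Bellman system on a domain \Omega
% (illustrative notation only): unknowns u_1, u_2, Hamiltonians H_i
% with quadratic gradient growth.
\begin{aligned}
  -\Delta u_1 &= H_1(x, \nabla u_1, \nabla u_2) \quad \text{in } \Omega,\\
  -\Delta u_2 &= H_2(x, \nabla u_1, \nabla u_2) \quad \text{in } \Omega,
\end{aligned}
\qquad
|H_i(x, p_1, p_2)| \le C \left( 1 + |p_1|^2 + |p_2|^2 \right).
```

The quadratic bound on the Hamiltonians is what makes existence and uniqueness delicate and motivates the structural conditions from which the maximum and minimum principles are derived.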

An investigation into growing correlation lengths in glassy systems

Fullerton, Christopher James January 2011 (has links)
In this thesis Moore and Yeo's proposed mapping of the structural glass to the Ising spin glass in a random field is presented. In contrast to Random First Order Theory and Mode Coupling Theory, this mapping predicts that there should be no glass transition at finite temperature. However, a growing correlation length is predicted from the size of rearranging regions in the supercooled liquid, and from this a growing structural relaxation time is predicted. Also presented is a study of the propensity of binary fluids (i.e. fluids containing particles of two sizes) to phase separate into regions dominated by one type of particle only. Binary fluids like this are commonly used as model glass formers, and the study shows that this phase separation behaviour is something that must be taken into account. The mapping relies on the use of replica theory and is therefore very opaque. Here a model is presented that may be mapped directly to a system of spins, and that also prevents the process of phase separation from occurring in binary fluids. The system of spins produced in the mapping is then analysed through the use of an effective Hamiltonian, which is in the universality class of the Ising spin glass in a random field. The behaviour of the correlation length depends on the spin-spin coupling J and the strength of the random field h. The variation of these with packing fraction and temperature T is studied for a simple model, and the results extended to the full system. Finally a prediction is made for the critical exponents governing the correlation length and structural relaxation time.
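The universality class referred to here is conventionally written via the random-field Ising spin-glass Hamiltonian (standard textbook form; the thesis derives the effective couplings J and h from the fluid rather than assuming them):

```latex
% Ising spin glass in a random field (standard form):
% spins s_i = \pm 1; couplings J_{ij} and fields h_i are
% quenched random variables.
H \;=\; -\sum_{\langle i j \rangle} J_{ij}\, s_i s_j \;-\; \sum_i h_i\, s_i
```

The relative strength of the random field h to the coupling J controls the growth of the correlation length, which is why the variation of both with packing fraction and temperature is the central quantity studied.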

A process study of enterprise systems implementation in higher education institutions in Malaysia

Ahmad, Abdul Aziz bin January 2011 (has links)
The implementation of information technology and its impact on organisational change has been an important phenomenon, discussed in the IS literature over the last 30 years. Treating information system (IS) implementation as organisational change is complex, mainly because of its multidisciplinary, socio-technical, dynamic and non-linear nature. This complexity bears directly on IS implementation project outcomes: success or failure. In view of this complexity, this research aims to understand how process studies can improve the understanding of enterprise system implementation. We argue that the socio-technical nature of IS development is inevitable; thus the only way forward is to explore and understand the phenomenon. Following this, we adopt the stakeholder perspective solely for the purpose of identifying stakeholders and their embedded interests and expectations. While prior research concentrated on a limited number of IS stakeholders, we adopt Pouloudi et al. (2004) in mobilizing a stakeholder perspective that incorporates non-human stakeholders within the analysis. Within the actor-network perspective, complexity is resolved through simplification (black-boxing) - unpacking or collapsing the complexity. During this simplification process, however, the risk of removing useful description of the phenomenon through labelling was avoided. To support this research, the punctuated socio-technical information systems change (PSIC) model was applied. In this model, interactions and relationships between its components (antecedent conditions, process, outcomes and organisational context) play a vital role. This research focuses on the implementation of an integrated financial system in three Malaysian universities through three interpretive case studies. Our findings show that each of our case studies provides a unique IS development trajectory.
Following stakeholder analysis, the different cases provide interesting combinations of conflicts and coalitions among human and non-human stakeholders which further dictates the project outcomes or the process of IS black-boxing. The relationship between the three case studies on the other hand provides an interesting illustration of IS technology transfer.

Analytische Bestimmung einer Datenallokation für Parallele Data Warehouses / Analytical determination of a data allocation for parallel data warehouses

Stöhr, Thomas 16 October 2018 (has links)
The rapidly growing importance of analyzing data warehouse contents, together with more convenient query interfaces for end users, significantly increases the volume of OLAP queries. Besides the use of processing and I/O parallelism, an adequate data allocation is the key to reducing the amount of work and achieving short response times for these complex queries. However, determining a suitable fragmentation and allocation for large data volumes, such as the fact tables and index structures of relational star schemas, is a hard problem, and practically no tool support exists for it today. We therefore present an approach for analytically determining a suitable multi-dimensional, hierarchical data allocation. Our approach should be fairly easy to integrate into a tool for automatically supporting the allocation problem.
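The core idea can be sketched as follows (a deliberately tiny example with invented dimension values and disk counts, not the analytical method of the paper): fragment the fact table along two dimension hierarchies, then allocate the fragments round-robin across disks, so that a query restricted on either dimension still engages several disks in parallel.

```python
from itertools import product

# Illustrative sketch of multi-dimensional fragmentation and allocation
# for a star-schema fact table (toy values, not the paper's method).
years   = [1996, 1997, 1998]          # fragmentation level of the time hierarchy
regions = ["north", "south", "west"]  # fragmentation level of the geo hierarchy
n_disks = 4

fragments = list(product(years, regions))               # 3 x 3 = 9 fragments
allocation = {frag: i % n_disks for i, frag in enumerate(fragments)}

# A query restricted to one year still spreads its work over several disks:
disks_for_1997 = {allocation[(1997, r)] for r in regions}
print(disks_for_1997)
```

Because the fragment count (9) is not a multiple of the disk count (4), consecutive fragments of the same year land on different disks, which is the declustering effect an allocation tool would aim for.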

Research In High Performance And Low Power Computer Systems For Data-intensive Environment

Shang, Pengju 01 January 2011 (has links)
The evolution of computer science and engineering is always motivated by the requirements for better performance, power efficiency, security, user interface (UI), etc. {CM02}. The first two factors are potential tradeoffs: better performance usually requires better hardware, e.g., CPUs with a larger number of transistors or disks with higher rotation speed; however, the increasing number of transistors on a single die or chip reveals super-linear growth in CPU power consumption {FAA08a}, and the change in disk rotation speed has a quadratic effect on disk power consumption {GSK03}. We propose three new systematic approaches, as shown in Figure 1.1 (Research Work Overview): Transactional RAID, data-affinity-aware data placement (DAFA), and modeless power management, to tackle the performance problem in database systems and large-scale clusters or cloud platforms, and the power management problem in chip multiprocessors, respectively.
The first design, Transactional RAID (TRAID), is motivated by the fact that in recent years more storage system applications have employed transaction processing techniques to ensure data integrity and consistency. In transaction processing systems (TPS), the log is a kind of redundancy used to ensure the transaction ACID properties (atomicity, consistency, isolation, durability) and data recoverability. Furthermore, highly reliable storage systems, such as redundant arrays of inexpensive disks (RAID), are widely used as the underlying storage for databases, guaranteeing system reliability and availability with high I/O performance. However, databases and storage systems tend to implement their fault-tolerance mechanisms independently {GR93, Tho05}, from their own perspectives, leading to potentially high overhead. We observe the overlapped redundancies between the TPS and RAID systems and propose a novel reliable storage architecture called Transactional RAID (TRAID). TRAID deduplicates this overlap by logging only one compact version (the XOR result) of the recovery references for the updating data. It minimizes the amount of log content as well as the log-flushing overhead, thereby boosting overall transaction processing performance. At the same time, TRAID guarantees comparable RAID reliability and the same recovery correctness and ACID semantics as traditional transaction processing systems.
On the other hand, the emerging myriad of data-intensive applications places a demand on high-performance computing resources with massive storage. Academia and industry pioneers have been developing big-data parallel computing frameworks and large-scale distributed file systems (DFS), widely used to facilitate high-performance runs of data-intensive applications such as bio-informatics {Sch09}, astronomy {RSG10}, and high-energy physics {LGC06}. Our recent work {SMW10} reported that data distribution in a DFS can significantly affect the efficiency of data processing and hence overall application performance, especially for applications with sophisticated access patterns. For example, Yahoo's Hadoop {refg} clusters employ a random data placement strategy for load balance and simplicity {reff}, which allows MapReduce {DG08} programs to access all the data (without distinguishing interest locality) at full parallelism. Our work focuses on Hadoop systems. We observed that data distribution is one of the most important factors affecting parallel programming performance; however, default Hadoop adopts a random data distribution strategy that does not consider data semantics, specifically data affinity. We propose a Data-Affinity-Aware (DAFA) data placement scheme to address this problem. DAFA builds a history data access graph to exploit data affinity and, according to the data affinity, re-organizes data to maximize the parallelism of the affinitive data, subject also to overall load balance. This enables DAFA to realize the maximum number of map tasks with data locality.
Besides system performance, power consumption is another important concern of current computer systems. In the U.S. alone, the energy used by servers which could be saved comes to 3.17 million tons of carbon dioxide, or 580,678 cars {Kar09}. However, the goals of high performance and low energy consumption are at odds with each other. An ideal power management strategy should be able to dynamically respond to changes (linear, nonlinear, or non-model) in workloads and system configuration without violating performance requirements. We propose a novel power management scheme called MAR (modeless, adaptive, rule-based) for multiprocessor systems to minimize CPU power consumption under performance constraints. By using richer feedback factors, e.g., the I/O wait, MAR is able to accurately describe the relationships among core frequencies, performance, and power consumption. We adopt a modeless control model to reduce the complexity of system modeling. MAR is designed for CMP (chip multiprocessor) systems, employing multi-input/multi-output (MIMO) control theory and per-core DVFS (dynamic voltage and frequency scaling).
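The XOR-logging idea behind TRAID can be sketched in a few lines (byte strings stand in for data blocks here; the real system works on RAID parity and database log records):

```python
# Sketch of XOR-based compact logging (simplified illustration of the
# TRAID idea): log one XOR delta instead of both old and new versions.
def xor_delta(old: bytes, new: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(a ^ b for a, b in zip(old, new))

old_block = b"balance=100"
new_block = b"balance=250"

log_entry = xor_delta(old_block, new_block)   # one compact record, not two

# Recovery reconstructs either version from the other plus the delta:
assert xor_delta(new_block, log_entry) == old_block  # undo
assert xor_delta(old_block, log_entry) == new_block  # redo
```

Because XOR is its own inverse, a single delta supports both undo and redo, which is what lets the overlapping redundancy between the transaction log and RAID parity be collapsed into one record.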

A Systems Approach to the Development of Enhanced Learning for Engineering Systems Design Analysis

Henshall, Edwin, Campean, Felician, Rutter, B. 09 May 2017 (has links)
This paper considers the importance of applying sound instructional systems design to the development of a learning intervention aimed at developing skills for the effective deployment of an enhanced methodology for engineering systems design analysis within a Product Development context. The leading features of the learning intervention are summarised, including the content and design of a training course for senior engineering management which is central to the intervention. The importance of promoting behavioural change by fostering meaningful learning as a collaborative process is discussed. A comparison is made between the instructional design of the corporate learning intervention being developed and the systems engineering based product design process which is the subject of the intervention.

Developing distributed applications with distributed heterogeneous databases

Dixon, Eric Richard 19 May 2010 (has links)
This report identifies how Tuxedo fits into the scheme of distributed database processing. Tuxedo is an On-Line Transaction Processing (OLTP) system. Tuxedo was studied because it is the oldest and most widely used transaction processing system on UNIX. That means that it is established, extensively tested, and has the most tools available to extend its capabilities. The disadvantage of Tuxedo is that newer UNIX OLTP systems are often based on more advanced technology. For this reason, other OLTPs were examined to compare their additional capabilities with those offered by Tuxedo. As discussed in Sections I and II, Tuxedo is modeled according to X/Open's Distributed Transaction Processing (DTP) model. The DTP model includes three pieces: Application Programs (APs), Transaction Monitors (TMs), and Resource Managers (RMs). Tuxedo provides a TM in the model and uses the XA specification to communicate with RMs (e.g. Informix). Tuxedo's TX specification, which defines communications between the APs and TMs, is also being considered by X/Open as the standard interface between APs and TMs; there is currently no standard interface between those two pieces. Tuxedo conforms to all of X/Open's current standards related to the model. Like the other major OLTPs for UNIX, Tuxedo is based on the client/server model. Tuxedo expands that support to include both synchronous and asynchronous service calls; Tuxedo calls that extension the enhanced client/server model. Tuxedo also expands its OLTP support to allow distributed transactions to include databases on IBM-compatible Personal Computers (PCs) and proprietary mainframe (Host) systems. Tuxedo calls this extension Enterprise Transaction Processing (ETP). The name enterprise comes from the fact that, since Tuxedo supports database transactions spanning UNIX, PCs, and Host computers, transactions can span the computer systems of entire businesses, or enterprises.
Tuxedo is not as robust as the distributed database system model presented by Date. Tuxedo requires programmer participation in providing the capabilities that Date says the distributed database manager should provide. The coordinating process is the process which is coordinating a global transaction. According to Date's model, agents exist on the remote sites participating in the transaction in order to handle the calls to the local resource manager; in Tuxedo, the programmer must provide that agent code in the form of services. Tuxedo does provide location transparency, but not in the form Date describes. Date describes location transparency as controlled by a global catalog; in Tuxedo, location transparency is provided by the location of servers as specified in the Tuxedo configuration file. Tuxedo also does not provide replication transparency as specified by Date: the programmer must write services which maintain replicated records. Date also describes five problems faced by distributed database managers. The first problem is query processing. Tuxedo provides capabilities to fetch records from databases, but does not provide the capabilities to do joins across distributed databases. The second problem is update propagation. Tuxedo does not provide for replication transparency, but it does provide enough capabilities for programmers to reliably maintain replicated records. The third problem is concurrency control, which is supported by Tuxedo. The fourth problem is the commit protocol; Tuxedo's commit protocol is the two-phase commit protocol. The fifth problem is the global catalog; Tuxedo does not have a global catalog. The other comparison presented in the paper was between Tuxedo and the other major UNIX OLTPs: Transarc's Encina, Top End, and CICS. Tuxedo is the oldest and has the largest market share. This gives Tuxedo the advantage of being the most thoroughly tested and the most stable.
Tuxedo also has the most tools available to extend its capabilities. The disadvantage Tuxedo has is that, since it is the oldest, it is based on the oldest technology. Transarc's Encina is the most advanced UNIX OLTP. Encina is based on DCE and supports multithreading. However, Encina has been slow to market and has had stability problems because of its advanced features. Also, since Encina is based on DCE, its success is tied to the success of DCE. Top End is less advanced than Encina, but more advanced than Tuxedo. It is also much more stable than Encina. However, Top End is only now being ported from the NCR machines on which it was originally built. CICS is not yet commercially available. CICS is good for companies with CICS code to port to UNIX and CICS programmers who are already experts. The disadvantage of CICS is that companies which already work with UNIX and do not use CICS will find its interface less natural than Tuxedo's, which originated under UNIX. / Master of Science
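The two-phase commit protocol mentioned above can be sketched as follows (a minimal single-process simulation with invented resource-manager names; real Tuxedo drives prepare/commit/abort through the XA interface between the TM and each RM):

```python
# Minimal two-phase commit sketch (illustrative only; not the Tuxedo API).
def two_phase_commit(resource_managers):
    # Phase 1: ask every RM to prepare and collect votes.
    votes = [rm["prepare"]() for rm in resource_managers]
    decision = "commit" if all(votes) else "abort"
    # Phase 2: broadcast the coordinator's decision to every RM.
    for rm in resource_managers:
        rm[decision]()
    return decision

def make_rm(name, can_commit, outcome):
    """Build a toy resource manager that records what it was told to do."""
    return {"prepare": lambda: can_commit,
            "commit": lambda: outcome.append((name, "commit")),
            "abort": lambda: outcome.append((name, "abort"))}

outcome = []
rms = [make_rm("informix", True, outcome), make_rm("oracle", True, outcome)]
decision = two_phase_commit(rms)
print(decision, outcome)
```

A single negative vote in phase 1 flips the global decision to abort for every participant, which is the atomicity guarantee the protocol provides across distributed databases.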
