621
An ontology-based reengineering methodology for service orientation. Zhang, Zhuopeng, January 2009.
The “Software as a Service” model in service-oriented computing allows loosely coupled software components to be designed and integrated with other software systems. Web services, together with service-oriented architectures, are a promising integration technology for facilitating the Webification of legacy systems. However, since most components in a legacy system were not designed and developed as services, current software systems need to be converted into sets of loosely coupled services. A service-oriented software reengineering process is therefore essential if legacy systems are to survive in the service-oriented computing environment. In this process, understanding, decomposing and reusing legacy code become important activities. In this thesis, a methodology for Service-Oriented Software Reengineering (SOSR) is proposed to support the identification, extraction and integration of reusable legacy code. Based on both the result of a legacy system assessment and a service-oriented analysis and design process, a reengineering decision is made according to a set of proposed rules. Following this decision, ontologies for SOSR, consisting of a Domain Concept Ontology (DCO), a Functionality Ontology (FO) and a Software Component Ontology (SCO), are developed using established ontology development methodologies. These ontologies store knowledge about both the application domain and the code entities, which supports further legacy code analysis. Service candidates in legacy systems are identified by mapping the FO onto the SCO via a novel method combining Formal Concept Analysis (FCA) and Relational Concept Analysis (RCA). After the service candidates are identified, the reusable legacy code is extracted by dependency analysis and program slicing. Rules defined in a code query language are used to detect dead code. Program slicing is applied as the main reverse engineering technique to recover executable legacy code, and an Executable Union Slicing (EUS) algorithm is defined to generate executable legacy components with high cohesion and low coupling. In the integration phase, the extracted legacy components containing core legacy code can either be wrapped into Web services for service orchestration in the business layer or be composed within a software service provider. Case studies show the proposed SOSR methodology to be flexible and practical for migrating legacy applications to service-oriented architectures, and it can be customised for different legacy systems. The methodology can help software developers and maintainers reengineer tightly coupled legacy information systems into loosely coupled, agile information systems.
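As an illustration of the FCA step used in the service-candidate identification above, the following is a minimal Python sketch that derives formal concepts (maximal groups of components sharing a set of functionalities) from a toy object-attribute context. The module and functionality names are hypothetical, and the thesis's actual FCA/RCA mapping over the FO and SCO is considerably richer.

```python
from itertools import combinations

# Toy context: which legacy components implement which functionalities.
# All names are illustrative, not taken from the thesis.
context = {
    "AccountModule": {"open_account", "check_balance"},
    "PaymentModule": {"check_balance", "transfer_funds"},
    "ReportModule":  {"check_balance"},
}

def common_attributes(components):
    """Attributes shared by every component in the set (the 'prime' operator)."""
    sets = [context[c] for c in components]
    return set.intersection(*sets) if sets else set()

def matching_components(attributes):
    """Components that possess every attribute in the set."""
    return {c for c, attrs in context.items() if attributes <= attrs}

# Enumerate formal concepts: (extent, intent) pairs closed under both operators.
concepts = set()
objs = list(context)
for r in range(1, len(objs) + 1):
    for combo in combinations(objs, r):
        intent = common_attributes(set(combo))
        extent = matching_components(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: -len(c[1])):
    print(sorted(extent), "->", sorted(intent))
```

Each printed concept pairs an extent (components) with an intent (shared functionalities); concepts with cohesive intents are the kind of grouping from which a service-candidate search would start.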
622
Private environments for programs. Dunn, Alan Mark, 25 September 2014.
Commodity computer systems today do not provide system support for privacy. As a result, given the new leak opportunities created by ever-increasing software complexity, leaks of private data are inevitable. This thesis presents Suliban and Lacuna, two systems that allow programs to execute privately on commodity hardware. These systems demonstrate different points in a design space wherein stronger privacy guarantees can be traded for greater system usability.

Suliban uses trusted computing technology to run computation-only code privately; we refer to this protection as "cloaking". In particular, Suliban can run malicious computations in a way that is resistant to analysis. Suliban uses the Trusted Platform Module and processor late launch to create an execution environment entirely disjoint from normal system software, and uses a remote attestation protocol to demonstrate to a malware distribution platform that the environment has been correctly created before the environment is allowed to receive a malicious payload. Because Suliban executes outside of standard system software, it can resist attackers with privileged operating system access and those that can perform some forms of physical attack. However, Suliban cannot access system services and requires extra case-by-case measures to obtain outside information such as the date or host file contents. Nonetheless, we demonstrate that Suliban can run computations that would be useful in real malware. In building Suliban, we uncover which defenses are most effective against it and highlight current problems with the use of the Trusted Platform Module.

Lacuna instead aims at forensic deniability, which guarantees that an attacker who gains full control of a system after a computation has finished cannot learn the answers to even binary questions (with a few exceptions) about the computation. This relaxation of Suliban's guarantees allows Lacuna to run full-featured programs concurrently with non-private programs on a system. Lacuna's key primitive is the ephemeral channel, which allows programs to use peripherals while maintaining forensic deniability. This thesis extends the original Lacuna work by investigating how Linux kernel statistics leak private session information and how to mitigate these leaks.
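To make the attestation step concrete, here is a minimal sketch of a measured-launch attestation flow of the general kind described above. It is an assumption-laden toy: it authenticates a hash chain with an HMAC over a pre-shared key, whereas a real TPM quote signs selected PCR values with a hardware-bound asymmetric key, and none of the function names come from Suliban itself.

```python
import hashlib
import hmac
import os

# Hypothetical shared key; a real TPM uses an asymmetric attestation key
# bound to the hardware, not a pre-shared secret.
ATTESTATION_KEY = os.urandom(32)

def measure(environment_blobs):
    """Hash-chain the loaded components, mimicking PCR 'extend' semantics."""
    digest = b"\x00" * 32
    for blob in environment_blobs:
        digest = hashlib.sha256(digest + hashlib.sha256(blob).digest()).digest()
    return digest

def quote(measurement, nonce):
    """Prover: authenticate the measurement against a verifier-chosen nonce."""
    return hmac.new(ATTESTATION_KEY, measurement + nonce, hashlib.sha256).digest()

def verify(expected_blobs, nonce, reported_quote):
    """Verifier: recompute the expected quote and compare in constant time."""
    expected = hmac.new(ATTESTATION_KEY, measure(expected_blobs) + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, reported_quote)

# The distribution platform releases the payload only if the environment matches.
environment = [b"late-launch loader", b"isolated runtime"]
nonce = os.urandom(16)
assert verify(environment, nonce, quote(measure(environment), nonce))
```

The verifier-chosen nonce is what makes the exchange fresh: a recorded quote from an earlier, honest launch cannot be replayed for a tampered environment.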
623
L’institution des nipûtum dans les royaumes paléo-babyloniens (+/- 2000-1600 av. J.-C.) [The institution of the nipûtum in the Old Babylonian kingdoms (ca. 2000-1600 BC)]. Scouflaire, Marie-France, 28 April 2008.
The two law codes of the Old Babylonian period devote several sections to the nipûtum; these have been transcribed, translated and commented upon many times. In addition, dozens of scattered texts, to which the commentaries make only vague allusions, address the same subject; whenever they are cited, it is only because they can shed some light on the meaning of the codes.

We decided to proceed in the opposite direction from traditional scholarship and to propose a definition of the nipûtum based on texts of everyday practice. The codes, in fact, seem to deal with the abnormal rather than the normal. In them, the nipûtum is defined only in terms of abuse: unjustified seizure, or mistreatment that could lead to the death of the person seized. Moreover, they mention the nipûtum only in the case of debts, and only for transactions between private individuals, pitting an all-powerful banker against a poor citizen in difficulty.

The institution of the nipûtum stands out first of all for its great chronological extension: it is attested from the beginning of the Amorite dynasties down to the last king of Babylon, a span of three centuries. As for its geographical distribution, it was in use throughout Mesopotamia, from north to south, from Sippar to Ur, and from east to west, even in quite remote areas such as Mari.

The nipûtum appears in a multitude of social contexts: debts, payment of rents, work to be carried out, the supply of workers or materials, obligations toward the administration, the allocation of fields as privileges, and so on; it could be claimed in almost any context and in any matter of private or public life. The term thus takes on a universal aspect in its applications.

This universality is also apparent in the categories of people concerned: from the small debtor to the ordinary employee who runs into trouble over the favours granted to him by the administration, from the artisan to the royal official, not forgetting the soldier or the princess, each with respect to their private affairs or their duties, the difference between the two not always being easy to establish. The nipûtum can be seen to affect different categories of officials, from the most modest to the most senior: the very official who usually imposes nipûtum seizures on his colleagues may suffer the same fate. It is also interesting to note that the merchant, the usual provider of nipûtum pledges, is subjected to them in turn, as the reverse side of his own obligations.

The possibility of taking nipûtum pledges therefore seems to have been available to anyone, in any situation, in dealings with fellow citizens or in the exercise of an office, whether or not that office was connected with the administration.

Contrary to received wisdom, the nipûtum did not consist only of human beings, the family or slaves of the "debtor": various animals (the code of Hammu-rabi mentions an ox) and objects from the "debtor's" estate are also attested.

We have also sought to determine the moment at which the nipûtum came into play: it appears to intervene only when a problem arose and someone considered himself wronged. In the case of debts, it is at the due date that the creditor, finding that he has not been paid as he should have been, resorts to the nipûtum.

In all other cases (lateness in other kinds of payment, such as rents of all sorts; failure to answer a summons; work not done or badly done; abuse in the exercise of rights, notably regarding the use of royal land), nipûtum pledges appear as soon as the offence is detected. Only one contract provides for them in advance, and we cannot assess its significance: an exceptional case, or the sole surviving trace of a common practice. In short, the nipûtum seems to exist only once a fault has been committed, whether that fault is evident or has been denounced by someone who, rightly or wrongly, considers himself wronged in the matter.
624
Swedish Code of Corporate Governance: A study of the compliance with the code among Swedish listed companies. Persson, Therese; Karsberg, Helena, January 2005.
After several scandals in the US, the focus on corporate governance increased rapidly and led to the implementation of “codes of best practice” in many countries. In 2002, the Swedish government appointed a committee to develop a Swedish Code of Corporate Governance. The purpose of the code is to help Swedish industry regain confidence in order to attract capital after the scandals that occurred. The code applies to Swedish companies listed on the A-list of the Stockholm stock exchange and to companies on the O-list with a market value above 3 billion SEK, and was to be implemented by 1 July 2005.

The code is based on the principle “comply or explain”, which means that companies do not have to comply with the requirements of the code as long as they explain why they deviate. The purpose of this thesis is therefore to examine to what extent Swedish companies prepare to comply, or are already complying, with the requirements of the code, and the reasons for differences in the level of compliance between companies. To fulfil this purpose, the authors used both a quantitative and a qualitative method: surveys were sent to all companies obliged to implement the code in order to establish the extent of compliance, and, to answer the second research question of why companies comply to different degrees, hypotheses were stated and interviews were conducted with five companies listed on the Stockholm stock exchange.

The authors found a high compliance rate among Swedish companies, with a mean of 88.49%. Companies on the A-list comply to a larger extent than those on the O-list. Based on the hypotheses, the authors found that companies with higher turnovers are more likely to comply with the code to a larger extent than companies with lower turnovers. Additional reasons for a high degree of compliance are: the need for resources, the impact of media, the culture and personal values within the organization, and the fact that the code does not imply any major changes for the organization. Reasons for a lower degree of compliance are: the resources that implementation requires, the high level of detail, and the complicated requirements of the code. These last factors lead to difficulties in interpreting the requirements of the code and to increased bureaucracy, which in turn lower the level of compliance.
625
THE EFFECTIVENESS OF THE READING MISCUE INVENTORY AND THE READING APPRAISAL GUIDE IN GRADUATE READING PROGRAMS (ASSESSMENT, REMEDIAL, TEACHER EDUCATION). LONG, PATRICIA CATHERINE, January 1984.
The purpose of this study was to examine differences in the effectiveness of two graduate teacher education programs in reading assessment, one group using the Reading Miscue Inventory and the other using one of its simplified forms, the Reading Appraisal Guide. The main question answered in this study is whether it is more effective for teachers to be trained in the Reading Miscue Inventory, or whether training in the Reading Appraisal Guide is sufficient to enable teachers to carry out competent assessments of children's reading ability. Over the six months of the study, several types of data were collected: assessments of children's taped readings of a story by the two groups of teachers before (the pretest) and after (the posttest) their respective training programs; anecdotal records of the teachers' views of the programs and the assessment instruments they were using; and observations of the teachers' reading assessments of children selected by them for their practicum. Quantitative analyses of the pretest and posttest were made, based on criteria drawn from the Reading Miscue Inventory manual and the investigator's own miscue analysis of the children's taped readings. They showed that the teachers trained in miscue analysis, as reflected in the Reading Miscue Inventory, were able to make significantly better assessments of children's reading ability than the teachers trained in the Reading Appraisal Guide. In addition to the quantitative analysis, written and oral statements made by the teachers during the pretest, posttest and training programs were subjected to qualitative analysis and comparison. These indicated that both programs had strengthened the teachers' adherence to the Goodman model of reading, but that those trained in the use of the Reading Miscue Inventory developed more effective assessment abilities, and were more approving of the instruments they used, than those trained in the use of the Reading Appraisal Guide. It was concluded that the Reading Miscue Inventory is an appropriate assessment instrument for use in college graduate reading programs. It proved complex and time-consuming to use, but it enabled teachers to make more accurate, in-depth assessments of children's reading than did the Reading Appraisal Guide. The latter was found to have some serious drawbacks, mostly arising from attempts to make it quicker and easier to use.
626
SPANISH HERITAGE LANGUAGE SOCIALIZATION PRACTICES OF A FAMILY OF MEXICAN ORIGIN. Delgado, Maria Rocio, January 2009.
This ethnographic case study describes the patterns of language socialization, literacy/biliteracy practices, language choice, and language use of a Spanish heritage bilingual family of Mexican origin, from both the participant perspective (the emic view) and the researcher perspective (the etic view). The analysis attempts to broaden knowledge of how families of Mexican origin use language at home by demonstrating how literacy/biliteracy practices (i.e., reading, writing and talk/conversation), language choice (i.e., Spanish, English, code-switching (CS)) and language use (i.e., domains) contribute to reinforcing, developing or hindering the use of Spanish as a heritage language. Using ethnographic methodology, the study analyzes the participants' naturally occurring language interactions. Socialization and language learning are seen as intricately interwoven processes in which language learners participate actively.

The analysis and discussion are presented in two sections: 1) language socialization in conjunction with literacy practices, and 2) language socialization in conjunction with language choice and CS. Language choice and CS are analyzed by means of conversation analysis theory (CA): the analysis of sequences in the participants' conversation. The description of the domains (i.e., what participants do with each language and the way they use it) constitutes the basis for the analysis.

The findings of this study show that language shift to English is imminent in an environment of reduced contact with parents, siblings, and the community of the heritage language group. Understanding which literacy practices are part of the everyday life of Hispanic households is relevant to the implementation of classroom literacy practices.
627
Code-Switching Patterns in Infant Bilingualism: A Case Study of an Egyptian Arabic-English-Speaking Four-Year-Old Bilingual Child. Gamal, Randa, January 2007.
The purpose of this sociolinguistic case study is to analyze the language processes and code-switching patterns of an Egyptian Arabic-English-speaking three-year-old girl named Sara. Sara, the daughter of the study's author, has been exposed to and has learned both languages simultaneously since she was nine months old. Family composition played an immense role in the language the parents used with their child and the language the child chose to speak. Sara's parents have spoken to her in Arabic since she was born; thus, a one-language household model was used. At the age of nine months, Sara started to attend day care and was exposed to English for the first time. The environmental influences of the English language on Sara's language choices were considered and examined within the framework of family gatherings, community settings/activities, and recreation/leisure activities, and the positive influence of these contexts was assessed.

Sara facilitated her natural communicative abilities by code-switching lexical items between Arabic and English, and vice versa, to complete her sentences. Lexical items including nouns, verbs, and adjectives were the most susceptible to code-switching; nouns and adjectives were code-switched more than verbs because of the incongruence in verbs between Arabic and English. Sara code-switched depending on the language abilities of her interlocutor. However, there was no association between Sara's code-switching and the topics of conversation. It was found that the proportion of intersentential code-switching decreased over time while that of intrasentential code-switching increased during the three-year study.
628
LDPC Coding for Magnetic Storage: Low Floor Decoding Algorithms, System Design and Performance Analysis. Han, Yang, January 2008.
Low-density parity-check (LDPC) codes have enjoyed tremendous popularity due to their capacity-approaching performance. In this dissertation, several different aspects of LDPC coding and its applications to magnetic storage are investigated. One of the most significant issues impeding the use of LDPC codes in many systems is the error-rate floor phenomenon associated with their iterative decoders. By delineating the fundamental principles, we extend algorithms for predicting error-rate performance in the floor region, originally developed for the binary-input AWGN channel, to partial-response channels. We develop three classes of decoding algorithms for mitigating the error floor by directly tackling its cause: trapping sets. In our experiments, these algorithms provide multiple orders of magnitude of improvement over conventional decoders, at the cost of various increases in implementation complexity.

Product codes are widely used in magnetic recording systems, where errors are both isolated and bursty. A dual-mode decoding technique for Reed-Solomon-code-based product codes is proposed, in which the second decoding mode involves maximum-likelihood erasure decoding of the binary images of the Reed-Solomon codewords. Through a tape storage application, we demonstrate that this dual-mode decoding system dramatically improves the performance of product codes, while the complexity added by the second decoding mode remains manageable. We also show the performance of this technique on a product code which has an LDPC code in the columns.

Run-length-limited (RLL) codes are ubiquitous in today's disk drives. RLL codes have enabled drive designers to pack data very efficiently onto the platter surface by ensuring stable symbol-timing recovery. We consider a concatenated system design with an LDPC code and an RLL code as components, simultaneously achieving desirable features such as the availability of soft information to the LDPC decoder, the preservation of the LDPC code's structure, and the ability to correct long erasure bursts.

Finally, we analyze the performance of an LDPC-coded magnetic recording channel in the presence of media noise. Employing advanced signal processing in the pattern-dependent-noise-predictive channel detectors, we demonstrate that a gain of over 1 dB, or a linear density gain of about 8%, relative to a comparable Reed-Solomon system is attainable by using an LDPC code.
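As context for the decoder discussion above, the following is a minimal, self-contained sketch of hard-decision bit-flipping decoding on a toy parity-check matrix. It is illustrative only: the dissertation's low-floor algorithms operate on far larger codes, with soft information and trapping-set-aware processing, and the matrix below is an arbitrary small example rather than a code from the text.

```python
import numpy as np

# Toy parity-check matrix H; each row is one parity check.
# Illustrative only: real LDPC matrices are much larger and sparser.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def bit_flip_decode(received, H, max_iters=20):
    """Gallager-style hard-decision bit flipping.

    Each iteration flips the bit(s) involved in the most unsatisfied
    parity checks; decoding stops once all checks are satisfied.
    """
    r = received.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2        # nonzero entries mark failed checks
        if not syndrome.any():
            return r, True          # valid codeword reached
        # Count, per bit, how many failed checks it participates in.
        failures = H.T @ syndrome
        r[failures == failures.max()] ^= 1
    return r, False

codeword = np.zeros(7, dtype=int)   # the all-zero word is always a codeword
received = codeword.copy()
received[2] ^= 1                    # inject a single bit error
decoded, ok = bit_flip_decode(received, H)
print(decoded, ok)                  # -> [0 0 0 0 0 0 0] True
```

Trapping sets are, informally, small subgraphs on which such iterative decoders can get stuck even though few bits are in error; the low-floor decoding algorithms described above target exactly those configurations.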
629
Memory Footprint Reduction of Operating System Kernels. He, Haifeng, January 2009.
As the complexity of embedded systems grows, operating systems (OSes) are increasingly used in embedded devices such as mobile phones, media players and other consumer electronics. Despite their convenience and flexibility, such operating systems can be overly general, containing features and code that are not needed in every application context and that incur unnecessary overheads. In most embedded systems, resources such as processing power, available memory, and power are strictly constrained; in particular, the amount of memory on embedded devices is often very limited. This, together with the popular use of operating systems in embedded devices, makes it important to reduce their memory footprint. This dissertation addresses this challenge and presents automated ways to reduce the memory footprint of OS kernels for embedded systems.

First, we present kernel code compaction, an automated approach that statically reduces the code size of an OS kernel by removing unused functionality. OS kernel code tends to differ from ordinary application code: it includes a significant amount of hand-written assembly code, multiple entry points, implicit control flow paths involving interrupt handlers, and frequent indirect control flow via function pointers. We use a novel "approximated compilation" technique to apply source-level pointer analysis to hand-written assembly code. A prototype implementation of our idea on an Intel x86 platform and a minimally configured Linux kernel obtains a code size reduction of close to 24%.

Even though code compaction can remove a portion of the OS kernel code, when the kernel is exercised with typical embedded benchmarks, such as MiBench, most kernel code is executed infrequently, if at all. Our second contribution is therefore on-demand code loading, an automated approach that keeps rarely used code on secondary storage and loads it into main memory only when it is needed. To minimize the overhead of code loading, a greedy node-coalescing algorithm is proposed to group closely related code together. The experimental results show that this approach can reduce the memory requirements of the Linux kernel code by about 53% with little degradation in performance.

Last, we describe dynamic data structure compression, an approach that reduces the runtime memory footprint of dynamic data structures in an OS kernel. A prototype implementation for the Linux kernel reduces the memory consumption of the Linux slab allocators by 17.5% when running the MediaBench suite, while incurring only a minimal increase in execution time (1.9%).
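To illustrate the grouping idea behind the on-demand loading approach, here is a hedged Python sketch of one plausible greedy node-coalescing pass over a weighted call graph: merge the two groups joined by the heaviest call edge whenever the result still fits in one loadable region. The function names, sizes, edge weights and region cap are illustrative assumptions, not details taken from the dissertation.

```python
# Hedged sketch of greedy node coalescing: repeatedly merge the two code
# groups joined by the heaviest call edge, as long as the merged group
# still fits in one on-demand-loaded region.
REGION_SIZE = 4096  # bytes per loadable region (assumed)

sizes = {"fA": 1500, "fB": 1200, "fC": 2000, "fD": 900}
call_edges = {  # (caller, callee) -> observed call frequency
    ("fA", "fB"): 40,
    ("fB", "fC"): 35,
    ("fA", "fD"): 5,
}

groups = {f: {f} for f in sizes}            # each function starts alone

def group_size(g):
    return sum(sizes[f] for f in g)

# Visit edges from heaviest to lightest, coalescing when the cap allows.
for (u, v), _ in sorted(call_edges.items(), key=lambda e: -e[1]):
    gu, gv = groups[u], groups[v]
    if gu is not gv and group_size(gu | gv) <= REGION_SIZE:
        merged = gu | gv
        for f in merged:
            groups[f] = merged              # all members share one group object

for g in {frozenset(g) for g in groups.values()}:
    print(sorted(g), group_size(g))         # e.g. ['fA', 'fB', 'fD'] 3600
```

Grouping hot caller-callee pairs this way means a single load brings in code that tends to execute together, so trips to secondary storage stay rare.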
630
On adaptive MMSE receiver strategies for TD-CDMA. Garcia-Alis, Daniel, January 2001.
No description available.