211 |
Cooperative control of systems with variable network topologies / Whittington, William Grant, 20 September 2013
Automation has become increasingly prevalent in all forms of society. Activities that are too difficult or too dangerous for a human can be done by machines, which do not share those downsides. In addition, tasks can be scheduled more precisely and accurately. Increases in autonomy have enabled a new level of tasks completed by teams of automated agents rather than a single one, called cooperative control. This has many benefits, but comes at the cost of increased complexity and coordination. The main thrust of research in this field is problem-based, treating communication issues as a secondary feature. There is a gap concerning problems in which many changes occur as rapidly as communication, and the issues that arise as a result. This is the main motivation for this work.
This research presents an approach to cooperative control in highly variable systems and tackles some of the issues present in such systems. One of the most important is the communication network itself, which is used as an indicator of how healthy the system is and how well it may react to future changes. Using the network as an input to control therefore allows the system to navigate between conservative and aggressive techniques, improving performance while still maintaining robustness.
Results are based on a test bed designed to simulate a wide variety of problem types characterized by network type, number of actors, frequency of changes, impact of changes, and method of change. The developed control method is compared to a baseline case that ignores cooperation, as well as to an idealized case that assumes perfect system knowledge. The baseline sacrifices coordination to achieve a high level of robustness at reduced performance, while the idealized case represents the best possible performance. The control techniques developed perform at least as well as the baseline case, if not better, in all simulations.
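As a rough illustration of steering control aggressiveness by network health (not the thesis's actual controller), the sketch below uses the algebraic connectivity of the communication graph as the health indicator and blends two hypothetical gains:

```python
import numpy as np

def algebraic_connectivity(adjacency: np.ndarray) -> float:
    """Second-smallest Laplacian eigenvalue (Fiedler value) of the
    communication graph; higher values indicate a better-connected,
    'healthier' network."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    return float(np.linalg.eigvalsh(laplacian)[1])

def blended_gain(adjacency, k_conservative=0.2, k_aggressive=1.0, lam_max=4.0):
    """Interpolate between conservative and aggressive control gains
    based on current network health (all parameter values hypothetical)."""
    lam = min(algebraic_connectivity(adjacency), lam_max)
    alpha = lam / lam_max  # 0 = fragile network, 1 = well connected
    return (1 - alpha) * k_conservative + alpha * k_aggressive

# Four agents in a ring topology; a denser topology would push the
# gain toward the aggressive end.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
print(blended_gain(ring))  # 0.6 for this topology
```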
|
212 |
A comparative study of concept-based and procedural teaching methods in user instruction of the OPAC at the M.L. Sultan Technikon / Choonoo, Pearl, January 1999
The purpose of this research was firstly to compare differences in online performance between two groups trained to use the Online Public Access Catalogue (OPAC) at the M L Sultan Technikon using two different types of instruction, namely concept-based and procedural teaching methods. Secondly, the objective was to compare these two teaching methods in relation to first-year students at the M L Sultan Technikon with differing levels of library experience, computer experience and English language experience. To meet the objectives of the research, literature from various sources was reviewed and analysed. Original research was conducted as a quasi-experiment. A random sample of 120 students was split between two teaching conditions, with sixty participants in a concept-based teaching condition and sixty in a procedural teaching condition. The research instruments were a background questionnaire to collect demographic information, a pre- and post-test to evaluate significant differences between the teaching methods, an evaluation questionnaire to collect affective responses, direct observation, and transaction log monitoring of the searches conducted. In a one-hour lecture, the concept-based group was taught general search concepts using model-based instruction techniques, while the procedural lecture demonstrated methods of searching in a step-by-step fashion. Data analysis made use of Microsoft Access 97 and Excel 97 to code and verify the data, and the Statistical Package for the Social Sciences (SPSS) v9.0 to conduct statistical analysis. The research found that first-year students were generally inexperienced in the use of the online information retrieval system. The majority of participants had no computer experience and used English as a second language. Others, although not in the majority, were found to have low levels of library experience. Performance on pre-tests was generally low for these participants, while those who had experience with libraries and computers, and who regarded English as a first language, were able to make fair use of the system for simple tasks such as author and title searches. This suggested that the prerequisite competencies needed for online searching were library literacy, computer literacy and some proficiency in the use of English. No significant differences between the teaching conditions were found on simple search tasks, although variances in performance resulting from individual differences were found. On difficult tasks, participants fared better with concept-based instruction, resulting in significant differences in performance. The findings supported the need for online instruction for novice end-users, taking cognisance of the need for suitable venues equipped with adequate hardware, provision of staff, and allocation of sufficient time for such instruction. The research proposes that model-based teaching be encouraged, especially for difficult tasks. In the decisions made, however, instruction must take note of the background of participants. Further proposals for instruction and other related aspects are discussed in the research. / Thesis (Ph.D.)-University of Natal, Pietermaritzburg, 1999.
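The study itself used SPSS v9.0; purely as an illustration of the kind of between-groups comparison described (with hypothetical scores, not the study's data), an equivalent independent-samples test in Python might look like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical post-test scores (out of 20) for the two teaching
# conditions on a difficult search task.
concept_based = rng.normal(loc=14.0, scale=3.0, size=60)
procedural = rng.normal(loc=12.0, scale=3.0, size=60)

# Independent-samples t-test, a standard way to test for a significant
# difference between two instruction groups.
t_stat, p_value = stats.ttest_ind(concept_based, procedural)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 => significant
```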
|
213 |
Les infractions portant atteinte à la sécurité du système informatique d'une entreprise / Offences against the security of a company's computer system / Maalaoui, Ibtissem, 09 1900
The new information and communication technologies (NICT) now play an important role in companies, regardless of their size or field of activity, and they contribute positively to the economy. However, their use has given rise to a new form of criminality that threatens the security and integrity of companies' computer systems. The scale of NICT-related criminality is hard to assess, and it is especially hard to control using existing legislative provisions, so legal adaptation appears unavoidable. Several industrialized countries have therefore decided to set up an adequate legal framework to guarantee the security of companies' computer systems. Our study focuses precisely on the mechanisms put in place by two different legal systems. Forced to take into account a new reality, one that did not necessarily exist several years ago, France and Canada have decided to amend their respective penal and criminal codes by adding provisions that punish new offences. In this work, we analyze the offences that undermine the security of companies' computer systems in light of the legal tools in place, and we assess how effectively those tools address today's computing reality. In other words, we seek to determine whether or not the law meets the needs of this technology.
|
214 |
Implementation of graph manipulation under X Window system environment / Hsieh, Chao-Ho, January 1992
In graph theory, graphs are mathematical objects that can be used to model networks, data structures, process scheduling, computations, and a variety of other systems where the relations between the objects in the system play a dominant role. We consider graphs as mathematically self-contained units with rich structure and comprehensive theory; as models for many phenomena, particularly those arising in computer systems; and as structures which can be processed by a variety of sophisticated and interesting algorithms. Presenting graphs well requires a good graphical user interface (GUI), and the X Window system is ideally suited for this purpose. This package is built on the X Window system environment. With it, we can manipulate graphs through functions that put nodes, put edges, delete nodes, delete edges, change the whole graph size, move the graph location, and modify edge weights. / Department of Computer Science
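As a language-neutral sketch of the manipulation operations listed above (the actual package is an X Window program; the class and method names here are illustrative, not the thesis's API):

```python
class Graph:
    """Minimal weighted-graph model mirroring the manipulation
    operations described above (all GUI concerns omitted)."""

    def __init__(self):
        self.nodes = {}   # node id -> (x, y) position on the canvas
        self.edges = {}   # (u, v) -> weight

    def put_node(self, node, x=0.0, y=0.0):
        self.nodes[node] = (x, y)

    def put_edge(self, u, v, weight=1.0):
        if u in self.nodes and v in self.nodes:
            self.edges[(u, v)] = weight

    def delete_edge(self, u, v):
        self.edges.pop((u, v), None)

    def delete_node(self, node):
        """Removing a node also removes every edge incident to it."""
        self.nodes.pop(node, None)
        self.edges = {e: w for e, w in self.edges.items() if node not in e}

    def modify_weight(self, u, v, weight):
        if (u, v) in self.edges:
            self.edges[(u, v)] = weight

    def move(self, dx, dy):
        """Translate the whole graph, as in the 'move graph location' op."""
        self.nodes = {n: (x + dx, y + dy) for n, (x, y) in self.nodes.items()}

g = Graph()
g.put_node("a"); g.put_node("b", 1.0, 0.5)
g.put_edge("a", "b", weight=2.0)
g.modify_weight("a", "b", 3.5)
g.move(0.5, 0.5)
```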
|
215 |
Estudo da notificação do óbito infantil em quatro municípios do estado do Piauí nos anos de 2005 e 2006 / Study of notification of death of children in four counties in the state of Piaui in the years 2005 and 2006 / Silva, Zenira Martins, January 2009
This dissertation is presented in article format and studies the notification of deaths of children under one year of age in the municipalities of Fronteiras, Monsenhor Gil, Pimenteiras and Simões, in the state of Piauí, in 2005 and 2006. A descriptive case study was carried out. Fieldwork was done by means of an active search across the various official and unofficial sources of death notification in the municipalities, using the infant death data from the Mortality Information System (SIM). The study sought to identify the percentage of under-notification of infant deaths, the contribution of the various notification sources, and the profile of infant deaths. It found infant death coverage of 59.5% for the municipalities as a whole; among the units classified as part of the health system, hospitals were the main notification source for deaths already reported to SIM, while the Family Health Program (PSF) and key informants proved to be relevant sources for deaths not reported to SIM. The findings indicate that the under-notification of infant deaths can be attributed both to issues of guaranteed access to health services and to issues that extend beyond the health sector, such as the mother's level of education.
|
216 |
Um estudo exploratório sobre o uso de diferentes algoritmos de classificação, de seleção de métricas, e de agrupamento na construção de modelos de predição cruzada de defeitos entre projetos / An exploratory study on the use of different classification algorithms, of selection metrics, and grouping to build cross-project defect prediction models / Satin, Ricardo Francisco de Pierre, 18 August 2015
Predicting defects in software projects is a complex task, especially for projects in the early stages of development, which often provide little data from which prediction models can be built. Cross-project defect prediction is indicated in such situations because it allows data from similar projects to be reused. This work presents an exploratory study on the use of different classification, feature selection, and clustering algorithms to build cross-project defect prediction models. The model was built using a performance measure, obtained by applying classification algorithms, as a way to find and group similar projects. To that end, the joint application of 8 classification algorithms, 6 feature selection approaches, and one clustering algorithm was studied on a data set of 1283 projects, resulting in the construction of 61584 different prediction models. The classification and feature selection algorithms were evaluated through statistical tests, which showed that Naive Bayes was the best-performing classifier compared with the other 7 algorithms, and that the best-performing feature selection pair was the CFS attribute evaluator combined with the Genetic Search method, compared with the other 6 pairs. Regarding the clustering algorithm, the proposal appears promising: the results show evidence that predictions using clustering were better than predictions performed without any similarity clustering, in addition to reducing training and testing cost during the prediction process.
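A minimal sketch of the pipeline shape described, assuming scikit-learn stands in for the study's tooling and synthetic data replaces the real project metrics (the CFS/Genetic Search feature selection step is omitted, as it has no direct scikit-learn equivalent):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)
# Hypothetical per-project software metrics and defect labels.
X = rng.random((1283, 20))           # 1283 projects, 20 metrics each
y = rng.integers(0, 2, size=1283)    # 1 = history of defective modules

# Group similar projects, then train one classifier per cluster so a
# new project reuses data only from projects that resemble it.
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
models = {}
for c in range(8):
    mask = clusters.labels_ == c
    models[c] = GaussianNB().fit(X[mask], y[mask])

# Predict defects for an unseen project via its nearest cluster's model.
new_project = rng.random((1, 20))
c = clusters.predict(new_project)[0]
print(models[c].predict(new_project))
```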
|
217 |
Grades computacionais baseadas em modelos econômicos / Grid computing based on economic models / Rosa, Ricardo da, 15 August 2018
Advisor: Maria Beatriz Felgar de Toledo / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Grid computing is a paradigm that allows the sharing of heterogeneous resources that are geographically distributed and under independent administration. This sharing must be managed so as to optimize resource utilization and meet quality-of-service requirements. Economic models can be applied to provide a fair allocation of these resources and to encourage a greater number of resources to be made available on the grid. This dissertation discusses a grid architecture based on economic models and, in particular, several auction models that allow negotiation between one resource provider and many consumers. The different auction modalities are analyzed to verify the behavior of resource consumers and providers in a grid environment. / Master's in Computer Science
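As an illustration of one auction modality such a grid architecture could use (a sealed-bid second-price auction; not necessarily the dissertation's chosen model):

```python
def vickrey_auction(bids):
    """Sealed-bid second-price (Vickrey) auction: the highest bidder
    wins the resource slot but pays the second-highest bid, which
    encourages consumers to bid their true valuations."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Hypothetical consumers bidding (in arbitrary currency units) for one
# CPU-hour offered by a provider.
bids = {"consumer_a": 12.0, "consumer_b": 9.5, "consumer_c": 11.0}
winner, price = vickrey_auction(bids)
print(winner, price)   # consumer_a wins and pays 11.0
```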
|
218 |
PRACTICAL CONFIDENTIALITY-PRESERVING DATA ANALYTICS IN UNTRUSTED CLOUDS / Savvas Savvides, 27 July 2020
Cloud computing offers a cost-efficient data analytics platform. This is enabled by constant innovations in tools and technologies for analyzing large volumes of data through distributed batch processing systems and real-time data through distributed stream processing systems. However, due to the sensitive nature of data, many organizations are reluctant to analyze their data in public clouds. To address this stalemate, both software-based and hardware-based solutions have been proposed, yet all have substantial limitations in terms of efficiency, expressiveness, and security. In this thesis, we present solutions that enable practical and expressive confidentiality-preserving batch and stream-based analytics. We achieve this by performing computations over encrypted data using Partially Homomorphic Encryption (PHE) and Property-Preserving Encryption (PPE) in novel ways, and by utilizing remote or Trusted Execution Environment (TEE) based trusted services where needed.

We introduce a set of extensions and optimizations to PHE and PPE schemes and propose the novel abstraction of Secure Data Types (SDTs), which enables the application of PHE and PPE schemes in ways that improve performance and security. These abstractions are leveraged to enable a set of compilation techniques making data analytics over encrypted data more practical. When PHE alone is not expressive enough to perform analytics over encrypted data, we use a novel planner engine to decide the most efficient way of utilizing client-side completion, remote re-encryption, or trusted hardware re-encryption based on Intel Software Guard eXtensions (SGX) to overcome the limitations of PHE. We also introduce two novel symmetric PHE schemes that allow arithmetic operations over encrypted data. Being symmetric, our schemes are more efficient than the state-of-the-art asymmetric PHE schemes without compromising the level of security or the range of homomorphic operations they support. We apply the aforementioned techniques in the context of batch data analytics and demonstrate the improvements over previous systems. Finally, we present techniques designed to enable the use of PHE and PPE in resource-constrained Internet of Things (IoT) devices and demonstrate the practicality of stream processing over encrypted data.
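To make the PHE idea concrete, here is a toy additively homomorphic (Paillier-style) sketch in which an untrusted party can sum ciphertexts without ever decrypting them; the parameters are deliberately tiny, and this is neither the thesis's scheme nor production-grade cryptography:

```python
import math
import random

# Toy Paillier keypair (tiny primes for illustration only; real
# deployments use >= 2048-bit moduli).
p, q = 1789, 1867
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)          # valid because the generator g = n + 1

def encrypt(m):
    """c = (1+n)^m * r^n mod n^2, with random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

def add(c1, c2):
    """Homomorphic addition: multiplying ciphertexts adds plaintexts."""
    return (c1 * c2) % n2

# An untrusted server can aggregate encrypted values it cannot read.
ciphertexts = [encrypt(v) for v in (30, 45, 25)]
total = ciphertexts[0]
for c in ciphertexts[1:]:
    total = add(total, c)
print(decrypt(total))  # 100
```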
|
219 |
Efficient and Robust Deep Learning through Approximate Computing / Sanchari Sen, 28 July 2020
Deep Neural Networks (DNNs) have greatly advanced the state-of-the-art in a wide range of machine learning tasks involving image, video, speech and text analytics, and are deployed in numerous widely-used products and services. Improvements in the capabilities of hardware platforms such as Graphics Processing Units (GPUs) and specialized accelerators have been instrumental in enabling these advances, as they have allowed more complex and accurate networks to be trained and deployed. However, the enormous computational and memory demands of DNNs continue to increase with growing data size and network complexity, posing a continuing challenge to computing system designers. For instance, state-of-the-art image recognition DNNs require hundreds of millions of parameters and hundreds of billions of multiply-accumulate operations, while state-of-the-art language models require hundreds of billions of parameters and several trillion operations to process a single input instance. Another major obstacle in the adoption of DNNs, despite their impressive accuracies on a range of datasets, has been their lack of robustness. Specifically, recent efforts have demonstrated that small, carefully-introduced input perturbations can force a DNN to behave in unexpected and erroneous ways, which can have severe consequences in several safety-critical DNN applications like healthcare and autonomous vehicles. In this dissertation, we explore approximate computing as an avenue to improve the speed and energy efficiency of DNNs, as well as their robustness to input perturbations.
Approximate computing involves executing selected computations of an application in an approximate manner, while generating favorable trade-offs between computational efficiency and output quality. The intrinsic error resilience of machine learning applications makes them excellent candidates for approximate computing, allowing us to achieve execution time and energy reductions with minimal effect on the quality of outputs. This dissertation performs a comprehensive analysis of different approximate computing techniques for improving the execution efficiency of DNNs. Complementary to generic approximation techniques like quantization, it identifies approximation opportunities based on the specific characteristics of three popular classes of networks - Feed-forward Neural Networks (FFNNs), Recurrent Neural Networks (RNNs) and Spiking Neural Networks (SNNs) - which vary considerably in their network structure and computational patterns.
First, in the context of feed-forward neural networks, we identify sparsity, or the presence of zero values in the data structures (activations, weights, gradients and errors), to be a major source of redundancy and therefore an easy target for approximations. We develop lightweight micro-architectural and instruction set extensions to a general-purpose processor core that enable it to dynamically detect zero values when they are loaded and skip future instructions that are rendered redundant by them. Next, we explore LSTMs (the most widely used class of RNNs), which map sequences from an input space to an output space. We propose hardware-agnostic approximations that dynamically skip redundant symbols in the input sequence and discard redundant elements in the state vector to achieve execution time benefits. Following that, we consider SNNs, which are an emerging class of neural networks that represent and process information in the form of sequences of binary spikes. Observing that spike-triggered updates along synaptic connections are the dominant operation in SNNs, we propose hardware and software techniques to identify connections that minimally impact the output quality and deactivate them dynamically, skipping any associated updates.
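A software analogue of the zero-skipping idea (the actual proposal is a micro-architectural/ISA extension; this sketch only illustrates why zero activations make the associated work redundant):

```python
import numpy as np

def matvec_skip_zeros(weights, activations):
    """Matrix-vector product that skips all work associated with
    zero activations - the software analogue of detecting a zero at
    load time and eliding the instructions it would feed."""
    out = np.zeros(weights.shape[0])
    for j, a in enumerate(activations):
        if a == 0.0:          # redundant column: contributes nothing
            continue
        out += weights[:, j] * a
    return out

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 1024))
x = rng.standard_normal(1024)
x[rng.random(1024) < 0.7] = 0.0   # ~70% sparsity, typical after ReLU
print(np.allclose(matvec_skip_zeros(W, x), W @ x))  # True
```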
The dissertation also delves into the efficacy of combining multiple approximate computing techniques to improve the execution efficiency of DNNs. In particular, we focus on the combination of quantization, which reduces the precision of DNN data structures, and pruning, which introduces sparsity in them. We observe that the ability of pruning to reduce the memory demands of quantized DNNs decreases with precision, as the overhead of storing non-zero locations alongside the values starts to dominate in different sparse encoding schemes. We analyze this overhead and the overall compression of three different sparse formats across a range of sparsity and precision values and propose a hybrid compression scheme that identifies the optimal sparse format for a pruned low-precision DNN.
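A back-of-the-envelope version of that observation, assuming a CSR-like format with 16-bit column indices and ignoring row pointers:

```python
def sparse_bits(nnz, value_bits, index_bits=16):
    """Approximate storage for a CSR-style encoding: each non-zero
    stores its value plus a column index."""
    return nnz * (value_bits + index_bits)

n, density = 1_000_000, 0.2        # tensor size and post-pruning density
nnz = int(n * density)
for value_bits in (32, 8, 2):
    dense = n * value_bits
    sparse = sparse_bits(nnz, value_bits)
    print(f"{value_bits}-bit values: sparse/dense = {sparse / dense:.2f}")
# 32-bit: 0.30  -> pruning pays off
#  8-bit: 0.60
#  2-bit: 1.80  -> index overhead now exceeds the dense tensor
```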
Along with improved execution efficiency of DNNs, the dissertation explores an additional advantage of approximate computing in the form of improved robustness. We propose ensembles of quantized DNN models with different numerical precisions as a new approach to increase robustness against adversarial attacks. It is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. We overcome this limitation to achieve the best of both worlds, i.e., the higher unperturbed accuracies of the full precision models combined with the higher robustness of the low precision models, by composing them in an ensemble.
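A minimal sketch of the ensembling idea on a toy linear model, where uniform weight rounding stands in for real DNN quantization:

```python
import numpy as np

def quantize(weights, bits):
    """Uniform symmetric quantization of a weight matrix."""
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1)
    return np.round(weights / scale) * scale

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(7)
W = rng.standard_normal((10, 64))   # toy 10-class linear classifier
x = rng.standard_normal((5, 64))    # a batch of 5 inputs

# Ensemble members: full precision plus 8-bit and 4-bit variants;
# averaging their class probabilities combines their strengths.
members = [W, quantize(W, 8), quantize(W, 4)]
probs = np.mean([softmax(x @ m.T) for m in members], axis=0)
print(probs.argmax(axis=1))         # averaged-ensemble predictions
```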
In summary, this dissertation establishes approximate computing as a promising direction to improve the performance, energy efficiency and robustness of neural networks.
|
220 |
FORENSICS AND FORMALIZED PROTOCOL CUSTOMIZATION FOR ENHANCING NETWORKING SECURITY / Fei Wang, 22 November 2021
Comprehensive networking security is a goal to achieve for enterprise networks. In forensics, flow-based attribution techniques need traffic analysis that captures the causal dependences in intricate program network flows. In provenance tracking, the connection between stealthy advanced persistent threats (APTs) and the execution of loadable modules is lost, because loading a module does not guarantee its execution. The reports of common vulnerabilities and exposures (CVE) demonstrate that many vulnerabilities are introduced during the protocol engineering process, especially for emerging Internet-of-Things (IoT) applications. A code generation framework targeting secure protocol implementations can substantially enhance security.

A novel automaton-based technique, NetCrop, to infer fine-grained program behavior by analyzing network traffic is proposed in this thesis. Based on network flow causality, it constructs automata that describe both the network behavior and the end-host behavior of a whole program, in order to attribute individual packets to the programs they belong to and to fingerprint high-level program behavior. A novel provenance-oriented library tracing system, Lprov, is investigated; it enforces library tracing on top of existing syscall-logging-based provenance tracking approaches. With the dynamic library call stack, the provenance of implicit library function execution is revealed and correlated to system events, facilitating the locating and defense of malicious libraries. Finally, the thesis presents ProFactory, in which a protocol is modeled, checked and securely generated, averting common vulnerabilities residing in protocol implementations.
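As a loose illustration of the automaton flavor of such traffic analysis (not NetCrop's actual algorithm), one can learn a transition map from observed per-program network-event sequences and check new flows against it; the event names here are hypothetical:

```python
from collections import defaultdict

def build_automaton(traces):
    """Learn a transition map from observed per-program network-event
    sequences: state = previous event, edges = events seen to follow it."""
    transitions = defaultdict(set)
    for trace in traces:
        for prev, nxt in zip(trace, trace[1:]):
            transitions[prev].add(nxt)
    return transitions

def conforms(trace, transitions):
    """Does a new flow sequence match the learned program behavior?"""
    return all(nxt in transitions.get(prev, set())
               for prev, nxt in zip(trace, trace[1:]))

# Hypothetical event traces for one program (DNS lookup, then HTTP).
traces = [["dns_query", "tcp_syn", "http_get", "tcp_fin"],
          ["dns_query", "tcp_syn", "http_post", "tcp_fin"]]
model = build_automaton(traces)
print(conforms(["dns_query", "tcp_syn", "http_get", "tcp_fin"], model))  # True
print(conforms(["tcp_syn", "dns_query"], model))                         # False
```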
|