  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

A Framework Based On Continuous Security Monitoring

Erturk, Volkan 01 December 2008
Continuous security monitoring is the process of tracking IT systems over time by collecting measurements and then reporting and analyzing the results, so that the organization's security level can be compared along a continuous time axis and its progress observed. The related literature contains very limited work on continuously monitoring the security of organizations. This thesis proposes a continuous security monitoring framework based on security metrics and, to reduce the implementation burden, introduces a supporting software tool called SecMon. An implementation of the framework in a public organization shows that the proposed system succeeds in building an organizational memory and in giving security stakeholders insight into the organization's IT security level.
22

Automated Event-driven Security Assessment

January 2014
abstract: With the growth of IT products and increasingly sophisticated software across operating systems, security risks in systems are rising constantly. Security assessment is therefore considered one of the primary mechanisms for measuring the assurance of systems, since systems that do not comply with security requirements may allow adversaries to reach critical information by circumventing security practices. To ensure security, considerable effort has been spent developing security regulations that codify security best practices. Applying shared security standards to a system is critical for understanding its vulnerabilities and preventing well-known threats from exploiting them. However, many end users change the configurations of their systems without paying attention to security, so it is not straightforward to protect systems from careless users in a timely manner. Detecting the installation of harmful applications is not sufficient, since attackers may exploit commonly used software as well as obviously risky software. In addition, checking the assurance of security configurations only periodically is costly in time and money, and leaves a window between checks that zero-day attacks and timing attacks can exploit. An event-driven monitoring approach is therefore critical: it assesses the security of a target system continuously, closes the window between checks, and avoids the exhausting task of inspecting the entire configuration at every check. Furthermore, the system should generate a vulnerability report for any user-initiated change that falls under the standards' requirements and turns out to be vulnerable. Assessing systems in distributed environments also requires applying the standards consistently to each environment.
Such uniform, consistent assessment is important because the approach for detecting security vulnerabilities may vary across applications and operating systems. In this thesis, I introduce an automated event-driven security assessment framework that addresses the issues above. I also discuss the implementation, which is based on commercial off-the-shelf technologies, and the testbed established to evaluate the approach, and I present evaluation results demonstrating its effectiveness and practicality. / Dissertation/Thesis / M.S. Computer Science 2014
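The event-driven idea in this abstract can be sketched in a few lines: rather than periodic full scans, each configuration-change event triggers an assessment of only the changed setting against a security baseline. This is an illustrative sketch under assumed names, not the thesis framework; the baseline contents and functions are hypothetical.

```python
# Hypothetical security baseline: required values for regulated settings.
SECURITY_BASELINE = {
    "PasswordComplexity": "enabled",
    "FirewallState": "on",
    "GuestAccount": "disabled",
}

def assess_change(setting, new_value):
    """Assess one configuration-change event against the baseline.

    Returns a vulnerability-report entry if the change violates the
    baseline, or None if the change is compliant or unregulated.
    """
    required = SECURITY_BASELINE.get(setting)
    if required is None or new_value == required:
        return None
    return {"setting": setting, "found": new_value, "required": required}

def monitor(events):
    """Consume a stream of (setting, value) change events; collect findings."""
    return [f for f in (assess_change(s, v) for s, v in events) if f is not None]

# A user turns the firewall off and changes an unregulated setting:
report = monitor([("FirewallState", "off"), ("Wallpaper", "blue")])
print(report)  # only the firewall change is reported
```

Because each event is assessed in isolation, the cost per change is constant and no window opens between full scans.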
23

Inorganic carbon dynamics in the soil-epikarst-cave continuum at the Lascaux Cave site (Dordogne, France): contributions of continuous hydrogeochemical and microclimatic monitoring to the study of cave aerology and the development of a method for simulating calco-carbonic processes at the walls

Houillon, Nicolas 13 December 2016
Since its discovery in 1940, and especially since its closure to the public in 1963, the conservation of the Lascaux Cave has relied, among other things, on understanding its interactions with the surrounding karst massif, in particular the processes at work in the epikarst and the shallow transmission zone. This thesis therefore set out to understand the dynamics of CO2 in the soil-epikarst-cave continuum in order to assess its potential impact on the conservation of the painted walls. Lascaux offers an observation window onto the flows arriving from the overlying epikarst in airlock 1 (SAS 1) of the cave, together with substantial instrumentation that has recorded numerous time series since the early 2000s: microclimatic parameters, CO2 content of the air at several points in the cave, and the discharge of the epikarstic emergence.

The first part of the study characterizes CO2 dynamics in an epikarst under soil cover. To this end, an experimental plot was instrumented to monitor hydroclimatic parameters and CO2 content at different depths. Periods of recharge (accumulation) and drainage (emanation toward the atmosphere) of CO2 in the shallow epikarst are demonstrated, as is the existence of a relatively stable CO2 store in the subsurface epikarst. Understanding these mechanisms leads to a general scheme of CO2 dynamics in the epikarst.

The second part studies this dynamic inside the Lascaux Cave, using time series of microclimatic parameters and CO2 content together with the δ13C isotopic signal. It is shown that the CO2 fluxes entering the cave have three distinct origins: the atmosphere (entrance), the shallow epikarst (Mondmilch Gallery and Silted-up Rooms) and the massif (scree of the Shaft of the Sorcerer). In parallel, two aerological regimes governing the spatio-temporal distribution of CO2 content in the cave are observed: stratification and thermal convection. These regimes dominate the CO2 dynamics of Lascaux because its exchanges with the outside atmosphere are weak compared with other karst cavities in the region. Finally, the impact of the air-pumping device on the aerology and CO2 dynamics of the cave is evaluated; comparing the dynamics with and without air extraction leads to conceptual schemes of CO2 dynamics in the Lascaux Cave.

The third part examines flow conditions in the epikarst of the Lascaux Cave, based on continuous monitoring of discharge, physico-chemical parameters and the natural fluorescence of the water. Analysis of the time series of these natural tracers provides a detailed characterization of flow conditions, notably the influence of the water content of the epikarst on the size of the recharge area and on the types of water reaching the outlet. In parallel, the impact of these flow conditions on the calco-carbonic equilibria of the waters arriving in the cave is analyzed.

Finally, the knowledge acquired is applied to assess, continuously, the potential impact of the waters (condensation and exfiltration) present on the painted walls of the cave. To this end, a methodology is developed, based on hydrogeochemical simulations, for estimating the mass of calcite potentially precipitated by exfiltration waters and dissolved by condensation waters. Its application to the left wall of the Hall of the Bulls, under both pumping and natural conditions, leads to an evaluation of the potential impact of the pumping device, and of the cave's aerology, on the conservation of the walls.
24

The effectiveness of diabetes camps (educational stays) as a treatment for type 1 diabetes

Hásková, Aneta January 2017
Introduction: The majority of patients with type 1 diabetes (type 1 DM) do not reach satisfactory levels of compensation, regardless of advances in available treatment. One of the basic pillars of successful type 1 DM treatment is thorough education. Objective: The aim of this thesis was to describe changes in glycosylated haemoglobin (HbA1c) in patients who completed a four-day educational program. Method: The retrospective analysis evaluated 40 patients with type 1 DM (age 32 ± 13 years, HbA1c before the program 67.1 ± 11.75 mmol/mol, diagnosed with DM for 12.5 ± 7.01 years). HbA1c was measured before the educational program and again 3, 6, 12 and 24 months after its completion. The program provided classes on carbohydrate counting, flexible insulin dosing, effective management of hypoglycemia, and physical activity. Statistical results were obtained by non-parametric tests (Kruskal-Wallis, repeated-measures ANOVA). Results: Three months after completing the program, a significant drop in HbA1c was observed (67.1 ± 11.75 vs. 60.2 ± 9.52 mmol/mol; p = 0.0093). The improvement persisted at 6 months (59.7 ± 9.59 mmol/mol; p = 0.0174), 12 months (56.5 ± 9.02 mmol/mol; p = 0.0006) and 24 months (57.6 ± 8.43 mmol/mol;...
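The paired before/after comparison at the heart of this study can be illustrated with a simple non-parametric check. The sketch below uses fabricated HbA1c values and a sign test standing in for the Kruskal-Wallis and repeated-measures tests actually used.

```python
from math import comb
from statistics import mean, stdev

# Fabricated paired HbA1c values (mmol/mol) before and 3 months after a
# program — illustrative only, not the study's data.
before = [67, 72, 58, 80, 65, 70, 61, 75, 69, 63]
after = [60, 66, 55, 71, 60, 64, 57, 68, 62, 59]

def sign_test_p(x, y):
    """Two-sided sign test for paired samples: a crude non-parametric
    stand-in for the tests used in the study."""
    diffs = [b - a for a, b in zip(x, y) if b != a]
    n = len(diffs)
    k = sum(d < 0 for d in diffs)  # number of decreases
    tail = min(k, n - k)
    p_one = sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(1.0, 2 * p_one)

print(f"HbA1c {mean(before):.1f} ± {stdev(before):.2f} -> "
      f"{mean(after):.1f} ± {stdev(after):.2f}, "
      f"p = {sign_test_p(before, after):.4f}")
```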
25

A novel online functional testing methodology based on a fully distributed continuous monitoring approach applied to communicating systems

Alvarez Aldana, José Alfredo 28 September 2018
MANETs (mobile ad hoc networks) are a significant area of network research because of the many opportunities arising from their inherent problems and applications. The most recurrent problems are mobility, availability and limited resources. A well-known concern in networks, and therefore in MANETs, is monitoring properties of the network and its nodes. The constraints of MANETs can significantly hamper monitoring efforts: mobility and availability can leave monitoring results incomplete. The properties usually monitored are simple ones, e.g., average CPU consumption or average bandwidth. Moreover, the evolution of networks has created a growing need to examine more complex, dependent and intertwined behaviors. The literature reports that the accuracy of monitored values, and thus of monitoring approaches, is unreliable and difficult to achieve because of the dynamic nature of MANETs. We therefore propose decentralized and distributed monitoring architectures that rely on multiple observation points. The decentralized approach combines hierarchical and gossip algorithms to provide effective monitoring. Through extensive experimentation we concluded that, although we achieved excellent performance, network fragmentation still has a harsh impact on the methodology. To improve on this, we proposed a distributed approach, built on stronger foundations, to enhance overall efficiency and accuracy. It provides a consensus mechanism that aggregates the results reported by many nodes into a more meaningful and accurate result. We support the proposal with mathematical definitions that model local results for a single node and global results for the network. Our experiments ran on an emulator built in-house on top of Amazon Web Services, NS-3, Docker and GoLang, varying the number of nodes, network size, network density, node speed, mobility algorithms and timeouts. This emulator let us analyze multiple aspects of the approaches on repeatable, documented and accessible testbeds. We obtained promising results for both approaches, and especially for the distributed approach, in particular regarding the accuracy of the monitored values.
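The gossip-based aggregation the abstract describes can be sketched as repeated pairwise averaging: each node holds a local monitoring value, and random exchanges drive every node's estimate toward the network-wide mean without a central collector. The topology and schedule below are simplifications for illustration, not the thesis protocol.

```python
import random

def gossip_average(values, rounds, seed=0):
    """Repeated pairwise averaging: two random nodes exchange values and
    both adopt the mean. The global mean is preserved at every step, and
    the spread of estimates shrinks toward zero."""
    rng = random.Random(seed)
    vals = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(vals)), 2)
        vals[i] = vals[j] = (vals[i] + vals[j]) / 2
    return vals

# Each node starts with a local monitoring value (e.g. CPU load, in %):
local_loads = [10.0, 80.0, 30.0, 55.0, 25.0]
estimates = gossip_average(local_loads, rounds=500)
print(estimates)  # every node's estimate converges toward the true mean, 40.0
```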
26

Continuous Monitoring As A Solution To The Large Sample Size Problem In Occupational Exposure Assessment

January 2014
27

Real life analysis of myoelectric pattern recognition using continuous monitoring

Ahlberg, Johan January 2016
Non-invasive signal acquisition is today the standard for testing pattern recognition algorithms for prosthetic control. Such research has consistently shown high performance on both prerecorded and real-time data, yet performance deteriorates when the systems are tested in real life. To investigate why, the author, a congenital amputee, wore a prosthetic system using pattern recognition control daily for a five-day period. The system generated one new classification every 50 ms. Movement execution was continuous for the open/close classes, while the side grip, fine grip and pointer classes executed only after winning a majority vote. System data were collected continuously, and errors were registered through both a manual and an automatic log. Calculations on the extracted data show that the grip classes had individual accuracies of 47%-70% while open/close reached 95%/98%; when classified by majority vote, the grips rose above 90% accuracy while open/close dropped to 80%. The conclusion is that majority voting can help complex classifications, such as fine grips, while simpler proportional movements are degraded by it. The major error sources were identified as signal similarities, electrode displacement and socket design. After the daily monitoring ended, the system's functionality was tested using the Assessment of Capacity for Myoelectric Control (ACMC). The ACMC results show that the system has functionality similar to commercial threshold control and is thus a viable option for both acquired and congenital amputees.
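The majority-vote scheme described above can be sketched with a sliding window over the classifier's 50 ms output stream. The window length and labels below are illustrative assumptions, not the values used in the study.

```python
from collections import Counter, deque

WINDOW = 7  # ~350 ms of classifications at one decision per 50 ms (assumed)

def majority_vote(stream, window=WINDOW):
    """Yield the winning label after each new classification, or None
    while no label holds a strict majority of the current window."""
    recent = deque(maxlen=window)
    for label in stream:
        recent.append(label)
        winner, count = Counter(recent).most_common(1)[0]
        yield winner if count > len(recent) // 2 else None

# Noisy stream from the classifier; "fine" should win the vote:
stream = ["fine", "side", "fine", "fine", "point", "fine", "fine"]
decisions = list(majority_vote(stream))
print(decisions[-1])  # "fine"
```

Voting over a window smooths isolated misclassifications of grips, but it also delays response by up to a window length, which is consistent with the finding that fast proportional open/close movements suffer under majority voting.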
28

Wastewater and contaminant dynamics in the sewer system of the Pau Pyrénées urban community (CDAPP)

Bersinger, Thomas 10 December 2013
Optimizing sewer systems and reducing discharges of untreated urban wastewater has become a key issue for many communities seeking to achieve the aquatic-environment quality objectives set by the European Water Framework Directive (WFD 2000/60/EC). This requires a thorough knowledge of the sewer system. The objective of this thesis, funded by the CDAPP and the Adour-Garonne Water Agency, was to study the dynamics of the CDAPP sewer system and its contribution to pollutant fluxes in the receiving environment (the river Gave de Pau).

The first step of the work was devoted to the hydraulic and physico-chemical characterization of the wastewater in dry and wet weather. A hydraulic study, carried out first, clarified the dynamics of discharges through the combined sewer overflows (CSOs) as a function of the characteristics of rainfall events. The physico-chemical characterization of the wastewater (total suspended solids or TSS, chemical oxygen demand or COD, metals, polycyclic aromatic hydrocarbons) showed that for all these parameters, pollutant fluxes increase sharply in wet weather, particularly at the beginning of a rainfall event (by a factor of 2 to 10). This phenomenon is explained by pollutant input from runoff and by erosion of the deposits accumulated in the network during dry weather. Only total nitrogen behaves differently, because it is mostly dissolved.

To better understand the dynamics of the regulatory pollutant parameters (TSS, COD and nitrogen), high-frequency monitoring (at a five-minute time step) was run for one year using turbidity and conductivity sensors. This continuous monitoring is the third part of the work. Correlations (r² ≈ 0.9) were established between the pollutant parameters COD and TSS and turbidity on the one hand, and between conductivity and total nitrogen on the other. These records enabled a better understanding of the sewer system: evidence of the first-flush phenomenon, estimates of the pollutant loads discharged through the CSOs, and study of in-sewer storage phenomena.

The last part of the thesis studies the contribution of wastewater discharges to the receiving environment. It demonstrates the moderate contribution of the treatment plant outfall (between 1% and 15%) in dry weather. In wet weather, the contribution of the sewer system through the CSOs is extremely variable depending on hydro-climatic conditions (from under 1% to over 50%). This work provides results directly usable by the sanitation operator to optimize CDAPP wastewater management, together with more fundamental results on the hydrological and physico-chemical dynamics of urban wastewater and its associated pollutants, such as the identification, using statistical tools, of the parameters that drive overflows and pollutant concentrations in wet weather.
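The surrogate-parameter idea above (calibrating an easily logged proxy such as turbidity against a regulatory parameter such as TSS, then judging the fit by r²) can be illustrated with an ordinary least-squares fit. The numbers below are made up for illustration; the thesis reports r² ≈ 0.9 on a year of five-minute records.

```python
def linear_fit(x, y):
    """Ordinary least squares: returns (slope, intercept, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical paired sensor/lab measurements:
turbidity = [12, 35, 60, 110, 180, 240]  # NTU
tss = [20, 55, 90, 160, 250, 330]        # mg/L
slope, intercept, r2 = linear_fit(turbidity, tss)
print(f"TSS ≈ {slope:.2f} * turbidity + {intercept:.1f}, r² = {r2:.3f}")
```

Once calibrated, the relation converts the continuous turbidity record into a continuous TSS estimate, which is what makes load estimation at a five-minute time step feasible.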
29

Continuous Auditing: Internal Audit at a Crossroads?

Andersson Skantze, Joel January 2017
Purpose – It is argued that traditional audit methods are becoming outdated for delivering sufficient assurance on business objectives, and a paradigm shift toward continuous auditing (CA) is proposed and perceived as necessary by academia, standard-setting groups and the business community alike. However, the practical prevalence of CA is insignificant relative to these expectations. The purpose of this paper is therefore to examine why, by investigating what factors motivate the adoption of CA among various internal audit functions (IAFs). Design/methodology/approach – The study draws on the Technology Acceptance Model (TAM); data were obtained through semi-structured interviews capturing internal auditors' attitudes toward CA and the factors that influence adoption. Findings – Views on CA are divided among IAFs: proponents embrace it as a set of value-adding methodologies, while opponents argue that it falls outside their responsibility and threatens the function's independence. Thus, contrary to previous research, CA's failure to reach its full potential is not attributable solely to practical factors but also to the IAFs' widely differing approaches to CA as a concept. Practical implications – The study draws attention to the marked disparity in internal auditors' attitudes toward CA, and to doubts about whether IAFs should leverage CA at all. These hurdles need to be considered by academia, standard-setting groups and the business community if the push for CA is to continue. Originality/value – The use of semi-structured interviews yields in-depth understanding of internal auditors' attitudes toward CA, and is more likely to capture their stance in detail than the large-scale surveys of previous work.
30

Control of critical data flows : Automated monitoring of insurance data

Karlsson, Christoffer January 2016
EU insurance companies are implementing the Solvency II directive, which calls for a stronger focus on data quality and information controls. Information controls are procedures that validate data at rest and data in motion to detect errors and anomalies. In this master's thesis a case study was carried out at AMF, a Swedish pension insurance company, to identify and investigate its critical data flows and the controls performed in each flow. One purpose of the project was to help AMF demonstrate compliance with the data quality requirements of the Financial Supervisory Authority. The thesis work was conducted at AMF between September and December 2015 and included interviews, Enterprise Architecture modeling, analysis, prototyping, product evaluation and the calculation of a business case.

A gap analysis was carried out to assess the need for change in AMF's existing information controls: the current state documents the present situation, including the attributes to be improved, while the future state outlines the target condition the company wants to reach. The gap between the two states is identified, the elements that make it up are described, and possible remedies for bridging it are presented.

Furthermore, a prototype of an automated control tool from a company called Infogix was implemented and analyzed with respect to usability, governance and cost. A benefits evaluation of the tool was carried out to see whether an investment would pay off for AMF, using the PENG method, a Swedish model developed by three senior consultants and specially adapted to the evaluation of IT investments. The evaluation showed that such an investment would become beneficial during the second year after it was made.
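An automated information control of the kind discussed here can be sketched as a reconciliation check on a data flow: compare record counts and a summed control total between source and target. The field names and tolerance below are hypothetical, not AMF's or Infogix's actual configuration.

```python
def reconcile(source_rows, target_rows, amount_key="premium", tolerance=0.01):
    """Return a list of control findings; an empty list means the flow passed."""
    findings = []
    if len(source_rows) != len(target_rows):
        findings.append(f"record count mismatch: "
                        f"{len(source_rows)} sent, {len(target_rows)} received")
    src_total = sum(r[amount_key] for r in source_rows)
    tgt_total = sum(r[amount_key] for r in target_rows)
    if abs(src_total - tgt_total) > tolerance:
        findings.append(f"control total mismatch: {src_total} vs {tgt_total}")
    return findings

# Illustrative flow with one record lost in transfer:
source = [{"premium": 100.0}, {"premium": 250.5}, {"premium": 75.0}]
target = [{"premium": 100.0}, {"premium": 250.5}]
print(reconcile(source, target))  # both the count and the total mismatch fire
```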
