  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Achieving Full Attack Coverage and Compiling Guidelines for enterpriseLang

Sadiq, Joshua, Hagelberg, Anton January 2021 (has links)
As the number of digital systems grows yearly, there is a need for good cyber security. A lack of such security can be attributed to the demands it places on resources or even knowledge. To fill this gap, tools such as enterpriseLang can be used by the end user to find flaws within a system, which can then be revised. This allows a user with inadequate knowledge of cyber security to create a safer IT architecture. The authors of this paper took part in the development and improvement of enterpriseLang. This was done by suggesting improvements based on certain design guidelines, as well as by attempting to achieve 100% attack coverage and improving the defense coverage. The results show a coverage increase of 0.6% for a specific model's attack steps. Furthermore, we find that 84.6% of the compiled guidelines are met, 7.7% were not fully met, and a similar amount were not applicable to enterpriseLang. As the language is still in development, much work remains that could improve it. A few suggestions would be to increase the attack coverage to 100%, to increase the defense coverage, and to improve enterpriseLang to fulfill the design guidelines, which would ultimately ease future projects within this domain. / Bachelor's degree project in electrical engineering 2021, KTH, Stockholm
152

General and Domain-Specific Sense of Coherence: Their Relation to General and Domain-Specific Subjective Well-Being Among University Students

Marjanovic, Stefan January 2023 (has links)
Both sense of coherence (SOC) and subjective well-being (SWB) are considered first and foremost to involve general orientations to life. Research on the influence of domain-specific scales compared with general scales is therefore limited. The main aim of the study was to examine whether there was a difference in explained variance between general and domain-specific SOC in relation to general and domain-specific SWB. The study included 71 university students. Data were collected via questionnaires consisting of six scales: three general scales measuring SOC, life satisfaction, and affective experiences, and three modifications of these scales directed at the university as a domain. The results showed that school-related SOC added unique explained variance beyond that explained by general SOC in the prediction of school-related SWB, but not in the prediction of general SWB. This finding indicated that the school-related SOC scale tapped specific factors that the general SOC scale did not cover, which provides an incentive to further investigate whether the use of domain-specific SOC scales may be of value in other areas.
153

Evaluating and Improving Domain-Specific Programming Education: A Case Study with Cal Poly Chemistry Courses

Fuchs, Will 01 June 2022 (has links) (PDF)
Programming is a key skill in many domains outside computer science. When used judiciously, programming can empower people to accomplish what might be impossible or difficult with traditional methods. Unfortunately, students, especially non-CS majors, frequently have trouble while learning to program. This work reports on the challenges and opportunities faced by Physical Chemistry (PChem) students at Cal Poly, SLO as they learn to program in MATLAB. We assessed the PChem students through a multiple-choice concept inventory, as well as through “think-aloud” interviews. Additionally, we examined the students’ perceptions of and attitudes towards programming. We found that PChem students are adept at applying programming to a subset of problems, but their knowledge is fragile; like many intro CS students, they struggle to transfer their knowledge to different contexts and often express misconceptions about programming. However, they differ in that the PChem students are first and foremost Chemistry students, and so struggle to recognize appropriate applications of programming without scaffolding. Further, many students do not perceive themselves as competent general-purpose programmers. These factors combine to discourage students from applying programming to novel problems, even though it may be greatly beneficial to them. We leveraged this data to create a workshop with the goal of helping PChem students recognize their programming knowledge as a tool that they can apply to various contexts. This thesis presents a framework for addressing challenges and providing opportunities in domain-specific CS education.
154

Effective Programmatic Analysis of Network Flow Data for Security and Visualization using Higher-order Statistics and Domain Specific Embedded Languages

Conley, Thomas A. 20 July 2012 (has links)
No description available.
155

Validation DSL for client-server applications

Fedorenko, Vitalii M. 10 1900 (has links)
Given the nature of client-server applications, most use some freeform interface, like web forms, to collect user input. The main difficulty with this approach is that all parameters obtained in this fashion need to be validated and normalized to protect the application from invalid entries. This is the problem addressed here: how to take client input and preprocess it before passing the data to a back-end, which concentrates on business logic. The method of implementation is a rule engine that uses a Groovy-based internal domain-specific language (DSL) for specifying input requirements. We justify why a DSL is a good fit for a validation rule engine, describe existing techniques used in this area, and comprehensively address the related issues of accidental complexity, security, and user experience. / Master of Science (MSc)
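The rule-engine idea described in the abstract can be sketched in a few lines. The thesis uses an internal Groovy DSL, so this Python analogue is illustrative only; all names (`Rules`, `Field`, the chained methods) are hypothetical, not taken from the thesis:

```python
# Minimal sketch of a validation rule engine: rules normalize input
# first, then check it, and report errors per field.
class Field:
    def __init__(self, name):
        self.name = name
        self.transforms = []  # normalization steps, applied before checks
        self.checks = []      # list of (predicate, error message)

    def strip(self):
        self.transforms.append(str.strip)
        return self

    def required(self):
        self.checks.append((lambda v: v != "", "is required"))
        return self

    def max_length(self, n):
        self.checks.append((lambda v: len(v) <= n, f"longer than {n} chars"))
        return self

class Rules:
    def __init__(self):
        self.fields = []

    def field(self, name):
        f = Field(name)
        self.fields.append(f)
        return f

    def validate(self, params):
        """Normalize input, then return (cleaned dict, error list)."""
        cleaned, errors = {}, []
        for f in self.fields:
            value = params.get(f.name, "")
            for t in f.transforms:
                value = t(value)
            for pred, msg in f.checks:
                if not pred(value):
                    errors.append(f"{f.name} {msg}")
            cleaned[f.name] = value
        return cleaned, errors

# Chained rule definitions play the role of the DSL:
rules = Rules()
rules.field("username").strip().required().max_length(8)
rules.field("email").strip().required()

cleaned, errors = rules.validate({"username": "  margarethe  ", "email": ""})
```

The chaining mimics how an internal DSL reads as declarative rule text while remaining ordinary host-language code.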
156

Test Automation for Grid-Based Multiagent Autonomous Systems

Entekhabi, Sina January 2024 (has links)
Traditional software testing usually relies on manually defined test cases. This manual process can be time-consuming, tedious, and incomplete in covering important but elusive corner cases that are hard to identify. Automatic generation of random test cases emerges as a strategy to mitigate the challenges associated with manual test case design. However, the effectiveness of random test cases in fault detection may be limited, leading to increased testing costs, particularly in systems where test execution demands substantial resources and time. Leveraging the domain knowledge of test experts can guide automatic random generation of test cases toward more effective zones. In this thesis, we target the quality assurance of multiagent autonomous systems and aim to automate test generation for them by applying the domain knowledge of test experts. To formalize the specification of the domain expert's knowledge, we introduce a small Domain Specific Language (DSL) for the formal specification of particular locality-based constraints for grid-based multiagent systems. We initially employ this DSL for filtering randomly generated test inputs. Then, we evaluate the effectiveness of the generated test cases through an experiment on a case study of autonomous agents. Statistical analysis of the experiment results demonstrates that utilizing domain knowledge to specify test selection criteria for filtering randomly generated test cases significantly reduces the number of potentially costly test executions needed to identify the persisting faults.  Domain knowledge of experts can also be utilized to generate test inputs directly with constraint solvers. We conduct a comprehensive study to compare the performance of the random-filtering and constraint-solving approaches in generating selective test cases across various test scenario parameters.
The examination of these parameters provides criteria for determining the suitability of random data filtering versus constraint solving, considering the varying size and complexity of the test input generation constraint. To conduct our experiments, we use the QuickCheck tool for random test data generation with filtering, and we employ Z3 for constraint solving. The findings, supported by observations and statistical analysis, reveal that test scenario parameters affect the performance of the filtering and constraint-solving approaches differently. Specifically, the results indicate complementary strengths between the two approaches: the random-generation-and-filtering approach excels for systems with a large number of agents and long agent paths, but degrades with larger grid sizes and stricter constraints. Conversely, the constraint-solving approach demonstrates robust performance for large grid sizes and strict constraints, but degrades with increased agent numbers and longer paths. Our initially proposed DSL is limited in its features and is only capable of specifying particular locality-based constraints. To be able to specify more elaborate test scenarios, we extend that DSL based on a more intricate model of autonomous agents and their environment. Using the extended DSL, we can specify test oracles and test scenarios for a dynamic grid environment and for agents having several attributes. To assess the extended DSL's utility, we designed a questionnaire to gather opinions from several experts and also ran an experiment to compare the efficiency of the extended DSL with the initially proposed one. The questionnaire results indicate that the extended DSL succeeded in specifying several scenarios that the experts found more useful than those specified by the initial DSL.
Moreover, the experimental results demonstrate that testing with the extended DSL can significantly reduce the number of test executions to detect system faults, leading to a more efficient testing process. / Safety of Connected Intelligent Vehicles in Smart Cities – SafeSmart
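The generate-and-filter strategy described in this abstract can be illustrated with a minimal sketch. The grid model, the Chebyshev-distance locality predicate, and all function names below are assumptions made for illustration, not the thesis's actual DSL:

```python
import random

def random_test_input(grid_size, n_agents, rng):
    """Randomly place agents on a grid (one candidate test input)."""
    return [(rng.randrange(grid_size), rng.randrange(grid_size))
            for _ in range(n_agents)]

def within_distance(positions, d):
    """Locality constraint: every pair of agents starts within
    Chebyshev distance d of each other."""
    return all(max(abs(ax - bx), abs(ay - by)) <= d
               for i, (ax, ay) in enumerate(positions)
               for (bx, by) in positions[i + 1:])

def generate_filtered(grid_size, n_agents, d, count, seed=0):
    """Keep only the random inputs that satisfy the selection criterion."""
    rng = random.Random(seed)
    selected = []
    while len(selected) < count:
        candidate = random_test_input(grid_size, n_agents, rng)
        if within_distance(candidate, d):
            selected.append(candidate)
    return selected

tests = generate_filtered(grid_size=10, n_agents=3, d=2, count=5)
```

The sketch also hints at the trade-off the abstract reports: as the grid grows or the constraint tightens, the acceptance rate of random candidates drops, which is where direct constraint solving becomes competitive.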
157

Application of software engineering methodologies to the development of mathematical biological models

Gill, Mandeep Singh January 2013 (has links)
Mathematical models have been used to capture the behaviour of biological systems, from low-level biochemical reactions to multi-scale whole-organ models. Models are typically based on experimentally-derived data, attempting to reproduce the observed behaviour through mathematical constructs, e.g. using Ordinary Differential Equations (ODEs) for spatially-homogeneous systems. These models are developed and published as mathematical equations, yet are of such complexity that they necessitate computational simulation. This computational model development is often performed in an ad hoc fashion by modellers who lack extensive software engineering experience, resulting in brittle, inefficient model code that is hard to extend and reuse. Several Domain Specific Languages (DSLs) exist to aid capturing such biological models, including CellML and SBML; however these DSLs are designed to facilitate model curation rather than simplify model development. We present research into the application of techniques from software engineering to this domain; starting with the design, development and implementation of a DSL, termed Ode, to aid the creation of ODE-based biological models. This introduces features beneficial to model development, such as model verification and reproducible results. We compare and contrast model development to large-scale software development, focussing on extensibility and reuse. This work results in a module system that enables the independent construction and combination of model components. We further investigate the use of software engineering processes and patterns to develop complex modular cardiac models. Model simulation is increasingly computationally demanding, thus models are often created in complex low-level languages such as C/C++. We introduce a highly-efficient, optimising native-code compiler for Ode that generates custom, model-specific simulation code and allows use of our structured modelling features without degrading performance. 
Finally, in certain contexts the stochastic nature of biological systems becomes relevant. We introduce stochastic constructs to the Ode DSL that enable models to use Stochastic Differential Equations (SDEs), the Stochastic Simulation Algorithm (SSA), and hybrid methods. These use our native-code implementation and demonstrate highly-efficient stochastic simulation, beneficial as stochastic simulation is highly computationally intensive. We introduce a further DSL to model ion channels declaratively, demonstrating the benefits of DSLs in the biological domain. This thesis demonstrates the application of software engineering methodologies, and in particular DSLs, to facilitate the development of both deterministic and stochastic biological models. We demonstrate their benefits with several features that enable the construction of large-scale, reusable and extensible models. This is accomplished whilst providing efficient simulation, creating new opportunities for biological model development, investigation and experimentation.
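The simulation code such a compiler emits ultimately reduces to numerical integration of the model's ODEs. This is a minimal sketch, assuming a first-order decay model and a fixed-step Euler scheme; none of the names are taken from Ode itself:

```python
import math

def simulate(deriv, y0, t_end, n_steps):
    """Fixed-step Euler integration of dy/dt = deriv(t, y)."""
    dt = t_end / n_steps
    t, y = 0.0, y0
    trajectory = [(t, y)]
    for i in range(n_steps):
        y = y + dt * deriv(t, y)       # Euler update
        t = (i + 1) * dt               # avoid accumulating float error in t
        trajectory.append((t, y))
    return trajectory

# Example model: first-order decay dy/dt = -k*y, whose exact
# solution is y0 * exp(-k*t), so the numerical result can be checked.
k = 0.5
traj = simulate(lambda t, y: -k * y, y0=1.0, t_end=4.0, n_steps=4000)
t_final, y_final = traj[-1]
```

A DSL compiler generating code of this shape can specialize the integrator to the model, which is what makes custom, model-specific simulation code faster than a generic interpreter.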
158

Collaborative Construction of Telecommunications Services. An Enterprise Architecture and Model Driven Engineering Method

CHIPRIANOV, Vanea 16 January 2012 (has links) (PDF)
In the context of world economies transitioning to services, telecommunications services are the primary means of communication between different economic entities and are therefore essential. The focus on the end consumer, the convergence with the Internet, the separation between the software and the hardware implementing a service, and the deregulation of the telecommunications market have led to a revolution and a new era in the telecommunications industry. To meet these challenges, former national telecommunications providers have to reduce the construction time of new telecommunications services from months to days, without negatively affecting other parameters (e.g., cost, quality of service, quality of experience). To tackle this broad theme, we propose a telecommunications service construction process, the software tools to be used in this process, and a tool building process for building them. The telecommunications service construction process reflects current practices in the telecommunications industry; as such, it should be (easily) accepted by practitioners. The software tools (i.e., Domain Specific Modeling Languages designed as profiles of an Enterprise Architecture Modeling Language, graphical editors, code generators, off-the-shelf network simulators, and a collaboration Design Rationale Domain Specific Modeling Language) help telecommunications providers face these challenges. The tool building process relies on models and provides a high degree of automation, so software tools can be built more rapidly. We illustrate the telecommunications service construction process and the tools using a multimedia conferencing service. Our proposals contribute to reducing the construction time of new telecommunications services, while providing the possibility of improved quality of service and increased involvement of the consumer. Faster provisioning of new telecommunications services that better answer consumers' needs will increase the rate of development of new economic services in general, and will ultimately have a positive impact on world economic development.
159

實作時序性資料集的形狀查詢語言 / Implementation of a Shape Query Language for Time Series Datasets

劉家豪, Liu, Chia Hao Unknown Date (has links)
There is more and more time-series data in fields such as medical engineering, business statistics, and finance. In financial analysis, for example, well-known chart patterns are used to forecast price trends and inform trading decisions. Because time-series data are usually very large and domain experts do not always agree, a newly described shape pattern starts out rough and must be refined repeatedly before it yields precise results. Implementing a high-level pattern description language for this is technically challenging. This thesis implements a shape query language for time-series datasets: through its simple syntax, domain users can define their own shape patterns simply and quickly. We have also developed an interactive environment and applied the language efficiently to data from the Taiwan stock market.
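A shape query of the kind described can be approximated by matching a pattern against a series' slope profile. This minimal sketch is illustrative only; the encoding and all names are assumptions, not taken from the thesis's language:

```python
def slopes(series):
    """Sign of change between consecutive points: +1 up, -1 down, 0 flat."""
    return [(b > a) - (b < a) for a, b in zip(series, series[1:])]

def match_shape(series, pattern):
    """Return start indices where the series' up/down profile contains
    the given pattern, e.g. ['up', 'up', 'down'] for a simple peak."""
    code = {'up': 1, 'down': -1, 'flat': 0}
    want = [code[p] for p in pattern]
    s = slopes(series)
    return [i for i in range(len(s) - len(want) + 1)
            if s[i:i + len(want)] == want]

# A "rise twice, then fall" query against a small price series:
prices = [3, 4, 6, 5, 4, 5, 7, 6]
hits = match_shape(prices, ['up', 'up', 'down'])
```

A real shape query language would add magnitude and duration constraints on each segment, but the core idea of describing a pattern abstractly and scanning the series for matches is the same.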
160

Compilation efficace d'applications de traitement d'images pour processeurs manycore / Efficient Compilation of Image Processing Applications for Manycore Processors

Guillou, Pierre 30 November 2016 (has links)
Many mobile devices now integrate optical sensors; smartphones, tablets, drones... are foreshadowing an impending Internet of Things (IoT). New image processing applications (filters, compression, augmented reality) are taking advantage of these sensors under strong constraints of speed and energy efficiency. Modern architectures, such as manycore processors or GPUs, offer good performance, but are hard to program. This thesis aims at checking the adequacy between the image processing domain and these modern architectures: reconciling programmability, portability and performance is still a challenge today. Typical image processing applications feature strong, inherent parallelism, which can potentially be exploited by the various levels of hardware parallelism inside current architectures. We focus here on image processing based on mathematical morphology, and validate our approach using the manycore architecture of the Kalray MPPA processor. We first show that integrated compilation chains, composed of compilers, libraries and run-time systems, make it possible to take advantage of various hardware accelerators from high-level languages. We especially focus on manycore processors, through various programming models: OpenMP, data-flow language, OpenCL, and message passing. Three out of four compilation chains have been developed, and are available to applications written in domain-specific languages (DSLs) embedded in C or Python. They greatly improve the portability of applications, which can now be executed on a large panel of target architectures. These compilation chains have then allowed us to perform comparative experiments on a set of seven image processing applications. We show that the MPPA processor is on average more energy-efficient than competing hardware accelerators, especially with the data-flow programming model. We show that compiling a DSL embedded in Python to a DSL embedded in C increases both the portability and the performance of Python-written applications. Our compilation chains thus form a complete software environment dedicated to image processing application development. This environment is able to target several hardware architectures efficiently, among them the MPPA processor, and offers interfaces in high-level languages.
