151

Efficient symbolic state exploration of timed systems : Theory and implementation

Bengtsson, Johan January 2001
Timing aspects are important for the correctness of safety-critical systems. It is crucial that these aspects are carefully analysed in designing such systems. UPPAAL is a tool designed to automate the analysis process. In UPPAAL, a system under construction is described as a network of timed automata, and the desired properties of the system can be specified using a query language. UPPAAL can then explore the state space of the system description to search for states violating (or satisfying) the properties. If such states are found, the tool provides diagnostic information, in the form of executions leading to those states, to help the designers, for example, to locate bugs in the design. The major obstacle preventing UPPAAL and other tools for timed systems from handling industrial-size applications is state-space explosion. This thesis studies the sources of the problem and develops techniques for real-time model checkers, such as UPPAAL, to attack it. As contributions, we have developed the notion of committed locations to model atomicity, a local-time semantics for timed systems to allow partial-order reductions, and a number of implementation techniques to reduce time and space consumption in state-space exploration. The techniques are studied and compared in case studies. Our experiments demonstrate significant improvements in the performance of UPPAAL.
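As a conceptual illustration of the exploration loop described above, the sketch below shows a plain explicit-state reachability search that returns a diagnostic trace when a violating state is found. It is a minimal sketch, not UPPAAL's algorithm: UPPAAL explores symbolic states (location/zone pairs represented as difference-bound matrices) rather than concrete states, and all names here are illustrative.

```python
from collections import deque

def explore(initial, successors, violates):
    """Breadth-first search for a reachable state violating a property.

    successors: state -> iterable of (label, next_state)
    violates:   state -> bool (the negated safety property)
    Returns a trace of labels to a violating state, or None.
    """
    visited = {initial}
    queue = deque([(initial, [])])            # (state, execution leading to it)
    while queue:
        state, trace = queue.popleft()
        if violates(state):
            return trace                      # diagnostic execution, as in UPPAAL
        for label, nxt in successors(state):
            if nxt not in visited:            # pruning revisits fights explosion
                visited.add(nxt)
                queue.append((nxt, trace + [label]))
    return None                               # no reachable state violates it

# Toy usage: reach counter value 3 from 0 with +1/+2 steps, capped at 5.
print(explore(0, lambda s: [(f'+{d}', s + d) for d in (1, 2) if s + d <= 5],
              lambda s: s == 3))              # -> ['+1', '+2']
```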
152

Efficient synchronization and coherence for nonuniform communication architectures

Radović, Zoran January 2003
Nonuniformity is a common characteristic of contemporary computer systems, mainly because of physical distances in computer designs. In large multiprocessors, access to shared memory is often nonuniform and may vary by as much as ten times in some nonuniform memory access (NUMA) architectures, depending on whether the memory is close to the requesting processor. Much research has been devoted to optimizing such systems. This thesis identifies another important property of computer designs, nonuniform communication architecture (NUCA). High-end hardware-coherent machines built from a few large nodes, or from chip multiprocessors, are typical NUCA systems that have a lower penalty for reading recently written data from a neighbor's cache than from a remote cache. The first part of the thesis identifies node affinity as an important property for scalable general-purpose locks. Several software-based hierarchical lock implementations that exploit NUCAs are presented and investigated. This type of lock is shown to be almost twice as fast for contended locks compared with other software-based lock implementations, without introducing significant overhead for uncontested locks. Physical distance in very large systems also limits hardware coherence to a subsection of the system. Software implementations of distributed shared memory (DSM) are cost-effective solutions that extend the upper scalability limit of such machines by providing the "illusion" of shared memory across the entire system. This also creates NUCAs with even larger local-remote penalties, since coherence is maintained entirely in software. The major source of inefficiency in traditional software DSM implementations is the cost of interrupt-based asynchronous protocol processing, not the actual network latency. As the raw hardware latency of internode communication decreases, the asynchronous overhead in the communication becomes more dominant. This thesis introduces the DSZOOM system, which removes this type of overhead by running the entire coherence protocol in the requesting processor.
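To make the hierarchical-lock idea concrete, here is a minimal sketch of a two-level lock: a thread first competes with threads on its own node for a node-local lock, and only the per-node winner competes globally, so the lock data tends to stay warm in nearby caches. This illustrates the general structure only; it is not the thesis' implementation, which is built on low-level atomic operations and additionally hands the lock over between threads on the same node to maximize node affinity.

```python
import threading

class HierarchicalLock:
    """Two-level lock: per-node locks in front of one global lock."""
    def __init__(self, num_nodes):
        self.global_lock = threading.Lock()
        self.node_locks = [threading.Lock() for _ in range(num_nodes)]

    def acquire(self, node_id):
        self.node_locks[node_id].acquire()  # contend only with local threads;
        self.global_lock.acquire()          # one winner per node goes global

    def release(self, node_id):
        self.global_lock.release()
        self.node_locks[node_id].release()

# Usage: each thread passes the id of the node it runs on.
lock = HierarchicalLock(num_nodes=2)
lock.acquire(node_id=0)
# ... critical section: lock metadata stays warm in node 0's caches ...
lock.release(node_id=0)
```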
153

Methods for run time analysis of data locality

Berg, Erik January 2003
The growing gap between processor clock speed and DRAM access time puts new demands on software and development tools. Deep memory hierarchies and high cache-miss penalties in present and emerging computer systems make execution time sensitive to data locality. Therefore, developers of performance-critical applications and optimizing compilers must be aware of data locality and maximize cache utilization to produce fast code. To aid the optimization process and help in understanding data locality, we need methods to analyze programs and pinpoint poor cache utilization and possible optimization opportunities. Current methods for run-time analysis of data locality and cache behavior include functional cache simulation (often combined with set sampling or time sampling), regularity metrics based on strides and data streams, and hardware monitoring. However, they all share a trade-off between run-time overhead, accuracy, and explanatory power. This thesis presents methods to efficiently analyze data locality at run time based on cache modeling. It suggests source-interdependence profiling as a technique for examining the cache behavior of applications and locating source-code statements and/or data structures that cause poor cache utilization. The thesis also introduces a novel statistical cache-modeling technique, StatCache. Rather than implementing a functional cache simulator, StatCache estimates the miss ratios of fully associative caches using probability theory. A major advantage of the method is that the miss-ratio estimates can be based on very sparse sampling. Further, a single run of an application is enough to estimate the miss ratios of caches of arbitrary sizes and line sizes and to study both spatial and temporal data locality.
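A minimal sketch of the statistical idea follows, under assumptions spelled out in the comments rather than taken verbatim from the thesis: with random replacement in a cache of L lines, a line survives one miss with probability 1 − 1/L, and roughly m·d misses occur during a reuse window of d accesses when the miss ratio is m, so m can be found as a fixed point over the sampled reuse distances.

```python
def estimate_miss_ratio(reuse_distances, cache_lines, iters=100):
    """Fixed-point solve m = mean(1 - (1 - 1/L)**(m * d)) over the samples."""
    m = 0.5                                    # any starting guess in (0, 1]
    for _ in range(iters):
        m = sum(1 - (1 - 1 / cache_lines) ** (m * d)
                for d in reuse_distances) / len(reuse_distances)
    return m

# One sparse set of sampled reuse distances answers "what if?" for any cache
# size, which is why a single profiled run of the application is enough.
samples = [3, 10, 250, 4_000, 12_000, 90_000]  # illustrative, in accesses
for lines in (512, 4_096, 32_768):             # e.g. 32 KB .. 2 MB of 64 B lines
    print(lines, round(estimate_miss_ratio(samples, lines), 3))
```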
154

Cache memory design trade-offs for current and emerging workloads

Karlsson, Martin January 2003
The memory system is the key to performance in contemporary computer systems. When designing a new memory system, architectural decisions are often arbitrated based on their expected performance effect. It is therefore very important to make performance estimates based on workloads that accurately reflect the future use of the system. This thesis presents the first memory-system characterization study of Java-based middleware, an emerging workload likely to be an important design consideration for next-generation processors and servers. Manufacturing technology has reached a point where it is now possible to fit multiple full-scale processors and integrate board-level features on a chip. The raised competition for chip resources has increased the need to design more effective caches without trading off area or power. Two common ways to improve cache performance are to increase the size or the associativity of the cache. Both of these approaches come at a high cost in chip area as well as power. This thesis presents two new cache organizations, each aimed at more efficient use of either power or area. First, the Elbow cache is presented, which is shown to be a power-efficient alternative to highly set-associative caches. Second, RASCAL, a selective cache-allocation algorithm that significantly reduces the miss ratio at a limited cost in area, is presented.
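The sketch below illustrates the general idea of selective cache allocation: blocks that miss once are bypassed, and only blocks that demonstrate reuse are allocated, so streaming data does not evict useful lines. The filter criterion here is an assumption for illustration and is not the published RASCAL algorithm.

```python
from collections import OrderedDict

def misses_with_selective_allocation(trace, cache_blocks, filter_blocks):
    cache = OrderedDict()        # LRU cache of block addresses
    missed_once = OrderedDict()  # small filter of recently missed blocks
    misses = 0
    for block in trace:
        if block in cache:
            cache.move_to_end(block)           # refresh LRU position
            continue
        misses += 1
        if block in missed_once:               # second miss: block shows reuse
            del missed_once[block]
            cache[block] = None                # now worth allocating
            if len(cache) > cache_blocks:
                cache.popitem(last=False)      # evict the LRU block
        else:                                  # first miss: bypass the cache
            missed_once[block] = None
            if len(missed_once) > filter_blocks:
                missed_once.popitem(last=False)
    return misses

# A reused working set mixed with a one-pass stream: the stream never evicts it.
trace = [0, 1, 2, 3] * 4 + list(range(100, 140)) + [0, 1, 2, 3] * 4
print(misses_with_selective_allocation(trace, cache_blocks=4, filter_blocks=8))
```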
155

Exploiting data locality in adaptive architectures

Wallin, Dan January 2003
The speed of processors increases much faster than memory access times, which makes memory accesses expensive. To address this problem, cache hierarchies are introduced to serve the processor with data. However, the effectiveness of caches depends on the amount of locality in the application's memory access pattern. The behavior of various programs differs greatly in terms of cache-miss characteristics, access patterns, and communication intensity. Therefore, a computer built for many different computational tasks potentially benefits from dynamically adapting to the varying needs of the applications. This thesis shows that a cc-NUMA multiprocessor with data-migration and replication optimizations efficiently exploits the temporal locality of algorithms. The performance of the self-optimizing system is similar to that of a system with a perfect initial thread and data placement. Data-locality optimizations are not free, however. Coherence protocols with large cache lines improve spatial locality but increase false-sharing misses for many applications. Prefetching techniques that reduce cache misses often lead to increased address and data traffic. Several techniques introduced in this thesis efficiently avoid these drawbacks. The bundling technique reduces the coherence traffic in multiprocessor prefetchers. This is especially important in snoop-based systems, where coherence bandwidth is a scarce resource. Bundled prefetchers manage to reduce both the cache-miss rate and the coherence traffic compared with non-prefetching protocols. The most efficient bundled prefetching protocol studied lowers the cache misses by 27 percent and the address snoops by 24 percent relative to a non-prefetching protocol, on average over all examined applications. Another proposed technique, capacity prefetching, avoids false-sharing misses by distinguishing cache lines involved in communication from non-communicating cache lines at run time.
156

On-chip monitoring for non-intrusive hardware/software observability

El Shobaki, Mohammed January 2004
The increased complexity of today's state-of-the-art computer systems makes them hard to analyse, test, and debug. Moreover, advances in hardware technology give system designers enormous possibilities to explore hardware as a means to implement performance-demanding functionality. We see examples of this trend in novel microprocessors, and in Systems-on-Chip, that comprise reconfigurable logic allowing for hardware/software co-design. To succeed in developing computer systems based on these premises, it is paramount to have efficient design tools and methods. An important aspect of the development process is observability, i.e., the ability to observe the system's behaviour at various levels of detail. These observations are required for many applications: when looking for design errors, during debugging, during performance assessment and fine-tuning of algorithms, for extraction of design data, and more. In real-time systems, and in computers that allow for concurrent process execution, the observability must be obtained without compromising the system's functional and timing behaviour. In this thesis we propose a monitoring system that can be used for non-intrusive run-time observation of real-time and concurrent computer systems. The monitoring system, designated the Multipurpose/Multiprocessor Application Monitor (MAMon), is based on a hardware probe unit (IPU) which is integrated with the observed system's hardware. The IPU collects process-level events from a hardware Real-Time Kernel (RTK), without perturbing the system, and transfers the events to an external computer for analysis, debugging, and visualisation. Moreover, the MAMon concept also features hybrid monitoring for the collection of more fine-grained information, such as program instructions and data flows. We describe MAMon's architecture, the implementation of two hardware prototypes, and the validation of the prototypes in different case studies. The main conclusion is that process-level events can be traced non-intrusively by integrating the IPU with a hardware RTK. Furthermore, the IPU's small footprint makes it attractive for SoC designs, as it provides increased system observability at a low hardware cost.
157

Hardware–Software Tradeoffs in Shared-Memory Implementations

Zeffer, Håkan January 2005
Shared-memory architectures represent a class of parallel computer systems commonly used in the commercial and technical markets. While shared-memory servers come in a large variety of configurations and sizes, advances in semiconductor technology have set the trend towards multiple cores per die and multiple threads per core. Software-based distributed shared-memory proposals were given much attention in the 90s, but their promise of short time to market and low cost could not make up for their unstable performance, and these systems seldom made it to the market. However, with the trend towards chip multiprocessors, multiple hardware threads per core, and the increased cost of connecting multiple chips together to form large-scale machines, software coherence in one form or another might be a good intra-chip coherence solution. This thesis shows that data locality, software flexibility, and minimal processor support for read and write coherence traps can offer good performance while removing the hard limit on scalability. Our aggressive fine-grained software-only distributed shared-memory system exploits key application properties, such as locality and sharing patterns, to outperform a hardware-only machine on some benchmarks. On average, the software system is 11 percent slower than the hardware system when run on identical node and interconnect hardware. In a detailed full-system simulation study of dual-core CMPs with multiple hardware threads per core, a system with minimal processor support for coherence traps is on average one percent slower than its hardware-only counterpart when some flexibility is taken into account. Finally, a functional full-system simulation study of an adaptive coherence-batching scheme shows that the number of coherence misses can be reduced by up to 60 percent and bandwidth consumption by up to 22 percent for both commercial and scientific applications.
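The sketch below illustrates the coherence-trap idea in its simplest form: invalid data is encoded in-band as a reserved "magic" value, so an ordinary load can detect stale data and branch into a software coherence handler, with no interrupt-based asynchronous processing. Everything here, names included, is an illustration; in a real system the check amounts to a few machine instructions or a small hardware trap mechanism.

```python
MAGIC = object()  # stands in for the reserved "invalid" bit pattern

class DsmNode:
    """One node's view of distributed shared memory (illustrative only)."""
    def __init__(self, size, fetch_from_home):
        self.mem = [MAGIC] * size       # all data starts invalid on this node
        self.fetch = fetch_from_home    # protocol action, run synchronously

    def read(self, addr):
        value = self.mem[addr]          # common case: one load, one compare
        if value is MAGIC:              # "coherence trap": the requesting
            value = self.fetch(addr)    # processor runs the protocol itself,
            self.mem[addr] = value      # with no interrupt-based handler
        return value

# Placeholder home-node fetch; a real system would go over the interconnect.
node = DsmNode(8, fetch_from_home=lambda addr: addr * 100)
print(node.read(3), node.read(3))       # first read traps, second does not
```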
158

Utvärdering av SNMP-baserade övervakningssystem för feldetektering / Evaluation of SNMP-based monitoring systems for error detection

Palme, Kenneth January 2016
The aim of the present work was to find the infrastructure monitoring system most in line with the needs and requirements of the company Optinova. Different protocols used for monitoring infrastructure are discussed in the report. Emphasis is placed on the Simple Network Management Protocol (SNMP), but other protocols are also discussed, such as the Internet Control Message Protocol (ICMP) and Windows Management Instrumentation (WMI). The report describes the work carried out to find a monitoring system that can be installed on a Windows-based server and that is able to monitor the software used for backup (ArcServe Backup D2D). Several systems available on the market were reviewed, and the two that appeared most suitable were chosen for the evaluation: PRTG and op5. The evaluation followed a model that limited the infrastructure to its most critical parts. It was concluded that both systems are appropriate for Optinova, and that certain features work better in PRTG while others work better in op5.
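As an illustration of the kind of SNMP polling such monitoring systems are built on, the sketch below reads a device's sysUpTime over SNMP v2c. It assumes the third-party pysnmp library; the host address and community string are placeholders, and the snippet is not taken from the report.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def poll_uptime(host, community='public'):
    """Return the device's sysUpTime (hundredths of a second), or None."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),           # mpModel=1 -> SNMP v2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.3.0'))))  # sysUpTime.0
    if error_indication or error_status:
        return None                                    # host down or no SNMP
    return int(var_binds[0][1])

# Placeholder host: a monitoring system polls many such OIDs on a schedule.
print(poll_uptime('192.0.2.10'))
```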
159

Utveckling av webbapplikationen Candydat / Development of the Web Application Candydat

Bengtsson, Albin, Björck, Fredrik, Drugge, Oskar, Lundquist, Sebastian, Olofsson, Emelie, Scheutz Godin, Anton, Yuen, Lisa, Åhlén, Viktor January 2016
This report describes how an online retailer of custom-designed laptop cases can be developed. The report also aims to contribute to the field of developing web applications with a design tool. The thesis is based on the question “How can one develop and give high usability to a web application with the purpose of selling personalized laptop cases to an end customer?” and has the vision that people who want to add a personal touch to everyday life will find the e-shop. According to the market analysis in Appendix 1, there is an unfulfilled need on the market for personalized laptop cases; development in this area would therefore be meaningful. The report accounts for the technical methods that can be used and the ways in which they might affect the competitiveness of the e-shop. During the development of the web application, great attention was directed towards design strategies that create confidence in the potential customer. According to the theory presented, this is a crucial factor in getting the customer to complete a purchase. The theory also notes that short loading times are vital for an e-shop, which has been an important consideration in the development of the application. The conclusion is that a web application whose main purpose is the sale of customized products can be developed using the agile software development method Scrum together with the other methods discussed throughout the project. Developing a design tool for laptop cases was a big challenge, in which minimalistic design was constantly weighed against increased functionality for the user.
160

Lösenordsmönster : Att förebygga svaga lösenord / Password patterns: Preventing weak passwords

Crossley, Mark, Lindell, Joakim January 2015
Passwords are used more now than ever before. Their use is based on the idea that the password is known only to the user and that its secrecy prevents others from accessing potentially valuable or sensitive information. But how secret is a password in today's high-tech world? Passwords are generally converted into hash sums and saved in databases. Cracking a password requires that this process be reversed so that the actual password can be derived from the hash sum. This can be achieved by two methods: an attacker can test all possible combinations (brute-force cracking), or the attacker can compare the password with a list of commonly used passwords (cracking with wordlists). This paper investigates a password's vulnerability to both brute-force cracking and cracking via wordlists. It uses a modern computer's processing speed to establish the amount of time needed to crack a given password via brute force. It also deploys state-of-the-art techniques to examine a password's content, analysing three databases from different online communities to examine any possible correlation between a user's hobby interests and their choice of password. The paper finds that the majority of passwords will not remain secret for very long. Short passwords drawn from a small alphabet are particularly vulnerable to brute-force attacks, and due to the increasing speed of modern computers, even passwords twelve characters long are still potentially vulnerable. The paper also finds that users from a variety of online communities choose common passwords that are likely to appear on a wordlist and are thus susceptible to wordlist attacks. Finally, the paper provides suggestions on how a user can choose a stronger password.
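The brute-force arithmetic behind these findings is simple enough to show directly: the keyspace is the alphabet size raised to the password length, and the time to exhaust it is the keyspace divided by the guess rate. The sketch below uses an assumed guess rate for illustration, not a figure measured in the thesis.

```python
def brute_force_seconds(alphabet_size, length, guesses_per_second):
    """Worst-case time to exhaust the keyspace by brute force."""
    return alphabet_size ** length / guesses_per_second

RATE = 1e10  # assumed: ~10 billion guesses/s, e.g. GPUs against a fast hash

for alphabet, name in ((26, 'lowercase'), (62, 'alphanumeric'), (95, 'printable')):
    for length in (8, 12):
        days = brute_force_seconds(alphabet, length, RATE) / 86400
        print(f'{name:12s} length {length:2d}: {days:.3g} days')
```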
