21

Enhancing Data Processing on Clouds with Hadoop/HBase

Zhang, Chen January 2011 (has links)
In the current information age, large amounts of data are generated and accumulated rapidly in industrial and scientific domains alike. This places heavy demands on data processing capabilities that can extract useful and valuable information from these data in a timely manner. Hadoop, the open-source implementation of Google's data processing framework (MapReduce, the Google File System and BigTable), is increasingly popular and is used to solve data processing problems in a variety of application scenarios. However, because it was originally designed for very large data sets that can easily be divided into parts and processed independently with limited inter-task communication, Hadoop does not apply readily to a wider range of use cases. As a result, many projects are under way to enhance Hadoop for different application needs, such as data warehousing, machine learning and data mining. This thesis is one such research effort. Its goal is to design novel tools and techniques that extend and enhance the large-scale data processing capability of Hadoop/HBase on clouds, and to evaluate their effectiveness in performance tests on prototype implementations. Two main research contributions are described. The first is "CloudWF", a lightweight computational workflow system for Hadoop. The second is "HBaseSI", a client library supporting transactional snapshot isolation (SI) in HBase, Hadoop's database component. CloudWF addresses the problem of automating the execution of scientific workflows composed of both MapReduce and legacy applications on clouds with Hadoop/HBase. It is the first computational workflow system built directly on Hadoop/HBase, and it uses novel methods for decomposing workflow directed acyclic graphs, storing and querying dependencies in HBase sparse tables, staging files transparently, and managing workflow execution in a decentralized way, relying on the MapReduce framework for task scheduling and fault tolerance. HBaseSI addresses the problem of maintaining strong transactional data consistency in HBase tables and is the first SI mechanism developed for HBase. It uses novel methods for handling distributed transaction management autonomously at individual clients. These methods greatly simplify HBaseSI's design and can be generalized to other column-oriented stores with architectures similar to HBase's. As a result of this simplicity, HBaseSI adds little overhead to HBase and directly inherits many of HBase's desirable properties. It is non-intrusive to existing HBase installations and user data, and is designed to scale with cloud size, in both data volume and number of nodes.
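As a toy illustration of the snapshot isolation idea that HBaseSI provides (not the library's actual API or commit protocol, which the abstract does not detail), the following Python sketch implements client-driven SI over an in-memory versioned key-value store: each transaction reads from the snapshot defined by its start timestamp, buffers writes, and aborts on a write-write conflict at commit. All names here are hypothetical.

```python
# Minimal sketch of client-driven snapshot isolation over a versioned
# key-value store. The in-memory store and timestamp counter stand in for
# a distributed column store and timestamp oracle; HBaseSI's real design
# is not reproduced here.

import itertools

_timestamps = itertools.count(1)   # global logical clock (stand-in for a timestamp oracle)
_store = {}                        # key -> list of (commit_ts, value) versions, ascending
_committed_writes = []             # (commit_ts, set of written keys), for conflict checks


class Transaction:
    def __init__(self):
        self.start_ts = next(_timestamps)  # snapshot: see only versions committed before this
        self.writes = {}                   # buffered writes, applied only on commit

    def get(self, key):
        if key in self.writes:             # read-your-own-writes
            return self.writes[key]
        versions = _store.get(key, [])
        # newest version committed at or before our snapshot
        visible = [v for ts, v in versions if ts <= self.start_ts]
        return visible[-1] if visible else None

    def put(self, key, value):
        self.writes[key] = value

    def commit(self):
        commit_ts = next(_timestamps)
        # first-committer-wins: abort on a write-write conflict with any
        # transaction that committed after our snapshot was taken
        for ts, keys in _committed_writes:
            if self.start_ts < ts < commit_ts and keys & self.writes.keys():
                raise RuntimeError("write-write conflict: transaction aborted")
        for key, value in self.writes.items():
            _store.setdefault(key, []).append((commit_ts, value))
        _committed_writes.append((commit_ts, set(self.writes)))
```

With this scheme, two overlapping transactions that both `put()` to the same key cause the later committer to abort, while readers never block: they simply see the snapshot fixed at their start timestamp.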
22

Compressive Spectral and Coherence Imaging

Wagadarikar, Ashwin Ashok January 2010 (has links)
This dissertation describes two computational sensors that were used to demonstrate applications of generalized sampling of the optical field. The first sensor was an incoherent imaging system designed for compressive measurement of the power spectral density in the scene (spectral imaging). The other sensor was an interferometer used to compressively measure the mutual intensity of the optical field (coherence imaging) for imaging through turbulence. Each sensor made anisomorphic measurements of the optical signal of interest, and digital post-processing of these measurements was required to recover the signal. The optical hardware and post-processing software were co-designed to permit acquisition of the signal of interest with sub-Nyquist rate sampling, given the prior information that the signal is sparse or compressible in some basis.

Compressive spectral imaging was achieved by a coded aperture snapshot spectral imager (CASSI), which used a coded aperture and a dispersive element to modulate the optical field and capture a 2D projection of the 3D spectral image of the scene in a snapshot. Prior information about the scene, such as the piecewise smoothness of objects, could be enforced by numerical estimation algorithms to recover an estimate of the spectral image from the snapshot measurement.

Hypothesizing that turbulence between the scene and CASSI would introduce spectral diversity of the point spread function, CASSI's snapshot spectral imaging capability could be used to image objects in the scene through the turbulence. However, no turbulence-induced spectral diversity of the point spread function was observed experimentally. Thus, coherence functions, which are multi-dimensional functions that completely determine optical fields observed by intensity detectors, were considered. These functions have previously been used to image through turbulence, but only after extensive and time-consuming sampling. Compressive coherence imaging was therefore attempted as an alternative means of imaging through turbulence.

Compressive coherence imaging was demonstrated by using a rotational shear interferometer to measure just a 2D subset of the 4D mutual intensity, a coherence function that captures the optical field correlation between all pairs of points in the aperture. By imposing a sparsity constraint on the possible distribution of objects in the scene, both the object distribution and the isoplanatic phase distortion induced by the turbulence could be estimated from the small number of measurements made by the interferometer. / Dissertation
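The sub-Nyquist recovery described above rests on sparse estimation. As a hedged, self-contained illustration (a generic l1 solver on a random projection, not the dissertation's instrument models or estimation algorithms), the following Python sketch recovers a sparse signal from underdetermined measurements using ISTA:

```python
# Toy sparse recovery: reconstruct sparse x from y = A @ x with far fewer
# measurements than unknowns, by l1-regularized least squares solved with
# ISTA. A stands in for a generic compressive measurement operator.

import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Iterative shrinkage-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L            # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                             # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)         # random projection: m << n (sub-Nyquist)
y = A @ x_true
x_hat = ista(A, y)
print("estimated support:", sorted(np.argsort(-np.abs(x_hat))[:k]))
```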
23

Ad Hoc Networks Measurement Model and Methods Based on Network Tomography

Yao, Ye 08 July 2011 (has links)
The measurability of a mobile ad hoc network (MANET) is the precondition for its management, performance optimization and network resource re-allocation. However, a MANET is an infrastructure-free, multi-hop, self-organized temporary network comprised of a group of mobile nodes with wireless communication devices. Not only does its topology vary over time, but the communication protocols used in its network and data link layers are diverse and non-standard. In order to solve the problem of measuring interior link performance (such as packet loss rate and delay) in a MANET, this thesis adopts external measurement based on network tomography (NT). To the best of our knowledge, the NT technique is well suited to ad hoc network measurement. This thesis studies MANET measurement techniques based on NT in depth. The main contributions are:

(1) An analysis technique for MANET topology dynamics based on mobility models. First, a formalization of ad hoc network mobility models is described. A method for capturing MANET topology snapshots is then proposed to find, and verify, that a MANET topology alternates periodically between steady and non-steady states; at the same time, it is shown to be practicable in theory to introduce the NT technique into ad hoc network measurement. Goodness-of-fit hypothesis testing is used to obtain the rules governing the parameters of MANET topology dynamics, and a Markov stochastic process is used to analyze them. Simulation results show that this method is not only valid and generalizable to all mobility models in the NS-2 tool, but also yields experimental formulas for topology-state holding times and topology-state transition probabilities.

(2) An analysis technique for MANET topology dynamics based on measurement samples. When the scenario files of the mobility models cannot be obtained beforehand, end-to-end measurement is used in the MANET to obtain path delays. The topology steady period of the MANET is then inferred by judging whether the path delay jitter is close to zero. At the same time, the MANET topology is identified by hierarchical clustering over samples of path performance taken during the topology steady period, in order to support link performance inference. Simulation results verify that this method not only detects the MANET measurement window effectively, but also identifies the MANET topology during that window correctly.

(3) A MANET link performance inference algorithm based on a linear analysis model. The inequality relations between link and path performance, such as MANET loss rates, are deduced according to a linear model. It is shown that the communication characteristics of packets, such as delay and loss rate, are more similar when sub-paths share longer runs of common links. When the rank of the routing matrix equals that of its augmented matrix, the linear model is used to infer ad hoc network link performance. Simulation results show that the algorithm is effective and has short computing times.

(4) A link performance inference algorithm based on multi-objective optimization. When the rank of the routing matrix is not equal to that of its augmented matrix, link performance inference is recast as a multi-objective optimization problem and a genetic algorithm is used to infer link performance. The probability distribution of link performance at a given time t is obtained by performing more measurements and statistically analyzing the candidate solutions. The simulations show that internal link performance, such as link loss ratio and link delay, can be inferred correctly even when the rank of the routing matrix is not equal to that of its augmented matrix.
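As a hedged illustration of the linear model in contribution (3) (the topology and loss rates below are invented, not taken from the thesis), the following Python sketch infers per-link loss from end-to-end path measurements in the solvable case, where the routing matrix and its augmented matrix have equal rank:

```python
# Link-loss inference via network tomography: path success probability is
# the product of link success probabilities along the path, so taking -log
# turns the model into the linear system R @ x = b, with R the known
# routing matrix (paths x links) and b = -log(path success rates).

import numpy as np

# 4 measured paths over 4 links; R[i, j] = 1 if path i traverses link j.
# Chosen so that rank(R) == 4 and the system is uniquely solvable.
R = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)

link_success = np.array([0.99, 0.95, 0.90, 0.97])   # hidden ground truth
path_success = np.exp(R @ np.log(link_success))      # what end-to-end probes measure

b = -np.log(path_success)
if np.linalg.matrix_rank(R) == np.linalg.matrix_rank(np.column_stack([R, b])):
    x, *_ = np.linalg.lstsq(R, b, rcond=None)        # x = -log(link success)
    print("inferred link loss rates:", 1 - np.exp(-x))
```

When the rank condition fails, as in contribution (4), the system is underdetermined and a search-based method such as a genetic algorithm is needed instead of a direct solve.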
24

Thinking with photographs at the margins of Antarctic exploration

McCarthy, Kerry Bridgett January 2011 (has links)
This thesis seeks a portable and accessible model for centralising photographs in enquiry. I argue that photographs are potent sites of human value-making but are typically relegated to illustrating word-based considerations, while the vast mass of ‘ordinary’ photographs is excluded from even this function. The context in which I develop and test the model is the heroic era of Antarctic exploration, a time and place that is dominated by an entrenched mythology, and where photographs have been assigned a merely pictorial role. In seeking to reactivate these objects and pictures I turn to Elizabeth Edwards’ notion of using photographs to think with, tracing the evolution of this idea through generations of thinking about photography, and emphasising recent writers such as Geoffrey Batchen, Margaret Olin and Joan Schwartz. My work confirms a resonance with Edwards’ thinking but also a need to emphasise photographic materiality and the photographic collective. Further, I demonstrate that this thinking also resonates with the work of Walter Benjamin and Roland Barthes, confirming a construction of photographs as generative anchoring points in networks of identification that are both culturalised and subjective. My model for thinking with photographs draws in Kenneth Burke’s pentad of dramatistic analysis, arguing that it fits productively with his concern to filter the rhetorical detritus of human behaviour as an entrée to viewing core motivations. The pentad has not previously been used to think with photographs, but it can be deployed successfully for this purpose by refreshing its operation in line with writers such as Robert Cathcart, James Chesebro and Gregory Clark. For Antarctica, thinking with photographs involves negotiating margins (depicted, physical, temporal and ideological), and in addressing the photographic mass this thesis argues for a reactivation of margins as points of insight rather than barriers of exclusion. Recent writers such as Francis Spufford, Stephen Pyne, John Wylie and Kathryn Yusoff have found new ways to construct the performance of Antarctic exploration, and, in this spirit, the thesis enacts Burke’s pentad to think with the photograph collection of ‘second tier’ Antarctic explorer Ernest Joyce. It shows Antarctic exploration to be also an intensely personal experience, with the power to overhaul mindsets but offering no guarantee that new expectations can be delivered on. In Joyce’s photographs it finds a nexus of contested narratives and contested photographies, and the seeds of a Benjaminian modernity that speak of the personal implications of the dissolution of meta-narratives.
26

Návrh zlepšení řízení obalového materiálu ve vybraném podniku / The Proposal of Management Improvement of Packaging Material in a Selected Company

Glonek, Andrej January 2021 (has links)
This diploma thesis deals with packaging materials at Frauenthal Automotive Hustopeče, s. r. o., specifically with proposals to improve the storage, flow and use of packaging material under limited storage capacity. The first part presents the theoretical basis of the work. The second part presents the company and analyzes its current situation. The last part proposes the author's own solutions to the company's current shortcomings.
27

Re:Visions : A Mother's Secondary Images

Shanks, Sarah M. January 2014 (has links)
No description available.
28

DynaCut: A Framework for Dynamic Code Customization

Mahurkar, Abhijit 03 September 2021 (has links)
Software systems are becoming increasingly bloated to accommodate a wide array of features, platforms and users. This results not only in wasted memory but also in an increased attack surface. Existing works broadly use binary-rewriting techniques to remove unused code, but this results in a binary that is highly customized for a given usage context: if the usage scenario changes, the binary has to be regenerated. We present DYNACUT, a framework for dynamic and adaptive code customization. DYNACUT gives the user the capability to customize the application to changing usage scenarios at runtime, without the need for the source code. It achieves this customization by leveraging two techniques: 1) identifying the code to be removed by using execution traces of the application, and 2) rewriting the process dynamically. The first technique takes traces of the application's wanted and unwanted features and generates their diffs to identify the features to be removed. The second technique modifies the process image, adding traps and fault-handling code, to remove vulnerable but unused code. DYNACUT can also disable temporally unused code, that is, code used only during the initialization phase of the application. To demonstrate its effectiveness, we built a prototype of DYNACUT and evaluated it on 9 real-world applications, including NGINX, Lighttpd and 7 applications of the SPEC Intspeed benchmark suite. DYNACUT removes up to 56% of executed basic blocks and up to 10% of the application code when used to remove initialization code. The total overhead is in the range of 1.63 seconds for Lighttpd, 4.83 seconds for NGINX and about 39 seconds for perlbench in the SPEC suite. / Master of Science / Software systems are becoming increasingly bloated to accommodate a wide array of users, features and platforms. This means the software not only occupies extra space on computing platforms but can also be exploited by hackers in more ways. Current works broadly use a variety of techniques to identify and remove this type of vulnerable and unused code, but these approaches result in software that has to be modified as the usage scenarios of the application change. We present DYNACUT, a dynamic code customization tool that can customize the application at runtime with minimal overhead. We use execution traces of the application to customize it according to user specifications. DYNACUT can identify code that is used only in the initial stages of execution (initialization code) and remove it. DYNACUT can also disable features of the application. To demonstrate its effectiveness, we built a prototype of DYNACUT and evaluated it on 9 real-world applications, including NGINX, Lighttpd and 7 applications of the SPEC Intspeed benchmark suite. DYNACUT removes up to 56% of executed basic blocks and up to 10% of the application code when used to remove initialization code. The total overhead is in the range of 1.63 seconds for Lighttpd, 4.83 seconds for NGINX and about 39 seconds for perlbench in the SPEC suite.
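As a hedged sketch of the first technique (trace diffing; the file format and function names here are hypothetical, not DYNACUT's actual interfaces), the following Python fragment computes removal candidates as the basic blocks that appear only in traces of unwanted features:

```python
# Illustrative trace-diff step: given basic-block execution traces from runs
# exercising wanted features and runs exercising unwanted ones, the blocks
# unique to the unwanted traces are candidates for removal. DYNACUT's actual
# trace collection and binary-rewriting machinery are not shown.

def load_trace(path):
    """Read one executed-basic-block address per line (hex strings)."""
    with open(path) as f:
        return {int(line, 16) for line in f if line.strip()}

def removal_candidates(wanted_traces, unwanted_traces):
    wanted = set().union(*(load_trace(p) for p in wanted_traces))
    unwanted = set().union(*(load_trace(p) for p in unwanted_traces))
    # blocks reached only by unwanted features are safe-to-remove candidates
    return unwanted - wanted

# Hypothetical usage: each candidate address could then be patched with a
# trap by the dynamic rewriter.
# candidates = removal_candidates(["run_core.trace"], ["run_unused_feature.trace"])
```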
29

Imagerie multispectrale, vers une conception adaptée à la détection de cibles / Multispectral imaging, a target detection oriented design

Minet, Jean 01 December 2011 (has links)
Hyperspectral imaging, which consists in acquiring the image of a scene in a large number of spectral bands, can be used to detect targets that are not visible using conventional color imaging. Hyperspectral imagers based on sequential acquisition are unsuitable for real-time detection applications. In this thesis, we propose to use a snapshot multispectral imager, able to acquire a small number of spectral bands simultaneously on a single image sensor. As the sensor offers a limited number of pixels, a trade-off must be struck by carefully choosing the number and spectral profiles of the imager's filters in order to optimize detection performance. For this purpose, we developed a band selection method that can be used to design multispectral imagers based on arrays of fixed or tunable filters. Using real hyperspectral images from several measurement campaigns, we show that selecting the spectral bands to acquire can lead to multispectral imagers able to compete with hyperspectral imagers in target detection and anomaly detection applications, while allowing snapshot acquisition and real-time detection. We jointly developed an adaptive snapshot multispectral imager demonstrator based on an array of 4 electronically tunable Fabry-Perot filters. The filters are developed in MOEMS (micro-opto-electro-mechanical systems) technology in partnership with the Institut d'Electronique Fondamentale. We present the optical design of the device and a tolerancing study that validated its feasibility.
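As a hedged illustration of detection-oriented band selection (the greedy strategy and the separability score below are simplifying assumptions, not the thesis's actual criterion), the following Python sketch picks the band subset that best separates synthetic targets from background:

```python
# Toy greedy band selection: choose n_bands spectral bands that maximize a
# simple target/background separability score (squared mean separation
# normalized by background variance). Real band selection would optimize a
# detection metric on real hyperspectral data.

import numpy as np

def separability(bands, target, background):
    """Score a band subset by normalized target/background mean separation."""
    t = target[:, bands].mean(axis=0)
    b = background[:, bands].mean(axis=0)
    var = background[:, bands].var(axis=0) + 1e-9
    return np.sum((t - b) ** 2 / var)

def greedy_band_selection(target, background, n_bands):
    remaining = list(range(target.shape[1]))
    chosen = []
    for _ in range(n_bands):
        best = max(remaining,
                   key=lambda b: separability(chosen + [b], target, background))
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen)

rng = np.random.default_rng(1)
background = rng.normal(size=(500, 64))      # 64-band background spectra
target = rng.normal(size=(50, 64))
target[:, [5, 20, 41]] += 2.0                # targets differ in 3 bands
print(greedy_band_selection(target, background, 4))  # should include 5, 20, 41
```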
30

Verdade ou mentira? Considerações sobre o flagrante, o pseudoflagrante e a composição na fotografia de German Lorca / True or False? Considerations on the Snapshot, the Pseudosnapshot and the Composition in German Lorca's Photography

Silva, Daniela Maura Abdel Nour Ribeiro da 24 April 2006 (has links)
This research, Verdade ou mentira? Considerações sobre o flagrante, o pseudoflagrante e a composição na fotografia de German Lorca (True or False? Considerations on the Snapshot, the Pseudosnapshot and the Composition in German Lorca's Photography), has as its subject the street photography that Lorca, a São Paulo photographer, produced between the late 1940s and the early 1950s within the scope of the Foto-Cine Clube Bandeirante (Bandeirante Photo-Cine Club). The study shows how Lorca uses the snapshot and its forgery (loosely called the pseudosnapshot in this dissertation), often emphasizing the photograph's composition by means of the crop. To this end, the dissertation is grounded in issues that go back to western art's traditional search to represent movement, and in the street photography that has been practiced since the mid-nineteenth century, both in Brazil and abroad. In this way, it shows how notions implicit in this broad context served as parameters for German Lorca's production of everyday scenes within modern Brazilian photography.
