321

A study of the binary systems salicylic acid-biphenyl and salicylic acid-diphenylamine

Marsh, Lloyd Russell January 1940 (has links)
1. From a study of the system salicylic acid-biphenyl it was concluded that there was no compound formation in the system. The solution is very nearly ideal, having a eutectic temperature of 67.6 °C at a mole fraction of 0.903 for the biphenyl. 2. The system salicylic acid-diphenylamine was studied and no compound formation was found to be present. The system is not as ideal as the salicylic acid-biphenyl system, but follows the ideal solution curve fairly well. The system has a eutectic temperature of 48.5 °C at 0.926 mole fraction of diphenylamine. / M.S.
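For context, the ideal liquidus curves against which both systems are compared follow the Schröder-van Laar relation; the form below is standard textbook background, stated as an assumption rather than quoted from the thesis:

```latex
% Ideal liquidus for component i (Schroeder-van Laar equation); standard
% background form, not quoted from the thesis. x_i: mole fraction of i in
% the melt, \Delta H_{f,i}: enthalpy of fusion, T_{m,i}: pure-component
% melting point, R: gas constant.
\ln x_i = -\frac{\Delta H_{f,i}}{R}\left(\frac{1}{T} - \frac{1}{T_{m,i}}\right)
```

The eutectic lies at the intersection of the two liquidus curves; a single simple eutectic with no compound formation, as reported for both systems here, is exactly what near-ideal behavior predicts.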
322

Formal Verification Techniques for Reversible Circuits

Limaye, Chinmay Avinash 27 June 2011 (has links)
As the number of transistors per unit chip area increases, the power dissipation of the chip becomes a bottleneck. New nano-technology materials have been proposed as viable alternatives to CMOS to tackle area and power issues. Power consumption can be minimized by the use of reversible logic instead of conventional combinational circuits. Theoretically, reversible circuits consume no power (or minimal power) when performing computations; this is achieved by avoiding information loss across the circuit. However, using reversible circuits to implement digital logic requires the development of new Electronic Design Automation techniques. Several approaches have been proposed, each with its own pros and cons, which often results in multiple designs for the same function. Consequently, this demands research into efficient equivalence checking techniques for reversible circuits. This thesis explores the optimization and equivalence checking of reversible circuits. Most existing synthesis techniques work in two steps: generating an initial, often sub-optimal, implementation of the circuit, followed by optimization of that design. This work proposes the use of Binary Decision Diagrams (BDDs) for the optimization of reversible circuits. The proposed technique identifies repeated-gate (trivial) as well as non-contiguous redundancies in a reversible circuit. Constructing a BDD for a sub-circuit (obtained by sliding a window of fixed size over the circuit) identifies redundant gates based upon the redundant variables in the BDD. The method did not identify any additional redundancies in benchmark circuits; however, hidden non-contiguous redundancies were consistently identified for a family of randomly generated reversible circuits. At present, several research groups focus upon efficient synthesis of reversible circuits, but little work has been done on identifying redundant gates in existing designs, and the proposed peephole optimization method stands among the few known techniques. The method fails to identify redundancies in a few cases, indicating the complexity of the problem and the need for further research in this area. Even for simple logical functions, multiple circuit representations exist which exhibit a large variation in the total number of gates and circuit structure. It may be advantageous to have multiple implementations to provide flexibility in the choice of implementation process, but it is necessary to validate the functional equivalence of each such design. Equivalence checking for reversible circuits has been researched to some extent, and a few pre-processing techniques were proposed prior to this work. One such technique uses Reversible Miter circuits followed by SAT solvers to ascertain equivalence. The second half of this work focuses upon applying the proposed reduction technique to Reversible Miter circuits as a pre-processing step, improving the efficiency of the subsequent SAT-based equivalence checking. / Master of Science
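To illustrate the trivial, repeated-gate case of such peephole optimization (not the BDD-based method itself, which also catches non-contiguous redundancies), here is a minimal Python sketch; the gate encoding is an illustrative assumption:

```python
# Minimal peephole pass: cancel adjacent identical NOT/CNOT/Toffoli gates.
# These gates are self-inverse, so two identical gates in a row are a no-op.
# The gate encoding (frozenset of control lines, target line) is an
# illustrative assumption, not the thesis's data structure.

def cancel_repeated_gates(circuit):
    """circuit: list of (frozenset_of_controls, target) tuples."""
    out = []
    for gate in circuit:
        if out and out[-1] == gate:
            out.pop()          # gate followed by itself is the identity
        else:
            out.append(gate)
    return out

# Example: the repeated Toffoli pair cancels, leaving two gates.
circ = [
    (frozenset(), 0),          # NOT on line 0
    (frozenset({0, 1}), 2),    # Toffoli(0,1 -> 2)
    (frozenset({0, 1}), 2),    # same Toffoli again: redundant
    (frozenset({0}), 1),       # CNOT(0 -> 1)
]
print(cancel_repeated_gates(circ))  # [(frozenset(), 0), (frozenset({0}), 1)]
```

The stack-based scan also catches pairs that become adjacent only after an inner pair is removed; redundancies separated by gates that do not commute past them are what require the BDD-based analysis described above.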
323

Equations of state with group contribution binary interaction parameters for calculation of two-phase envelopes for synthetic and real natural gas mixtures with heavy fractions

Nasrifar, K., Rahmanian, Nejat 03 1900 (has links)
Yes / Three equations of state with a group contribution model for binary interaction parameters were employed to calculate the vapor-liquid equilibria of synthetic and real natural gas mixtures with heavy fractions. In order to estimate the binary interaction parameters, the critical temperatures, critical pressures, and acentric factors of the binary constituents of the mixture are required. The binary interaction parameter model also accounts for temperature dependence. To perform phase equilibrium calculations, the heavy fractions were first discretized into 12 Single Carbon Number (SCN) groups using generalized molecular weights. Then, using the generalized molecular weights and specific gravities, the SCN groups were characterized. Afterwards, phase equilibrium calculations were performed employing a set of (nc + 1) equations, where nc stands for the number of known components plus the 12 SCN groups. The equations were solved iteratively using Newton's method. Predictions indicate that the use of binary interaction parameters for highly sour natural gas mixtures is quite important and must not be neglected. For sweet natural gas mixtures, however, the effect of binary interaction parameters is less pronounced.
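As background, binary interaction parameters typically enter a cubic equation of state through the van der Waals one-fluid mixing rules; the form below is the standard one, not the paper's specific group-contribution correlation:

```latex
% Standard van der Waals one-fluid mixing rules for a cubic EoS.
% k_{ij} is the binary interaction parameter; in this work it comes from a
% temperature-dependent group-contribution model built on the constituents'
% critical temperatures, critical pressures and acentric factors.
a_{\mathrm{mix}} = \sum_i \sum_j x_i x_j \sqrt{a_i a_j}\,\bigl(1 - k_{ij}\bigr),
\qquad
b_{\mathrm{mix}} = \sum_i x_i b_i
```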
324

Axion clouds around black holes in inspiraling binaries / インスパイラルする連星におけるブラックホール周りのアクシオン雲

Takahashi, Takuya 25 March 2024 (has links)
Kyoto University / New-system, course-based doctorate / Doctor of Science / Degree No. Kō-25108 / Science Doctorate No. 5015 / Kyoto University Graduate School of Science, Department of Physics and Astrophysics / (Chief examiner) Professor 田中 貴浩, Associate Professor 久徳 浩太郎, Professor 橋本 幸士 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
325

Gradient Boosted Decision Tree Application to Muon Identification in the KLM at Belle II

Benninghoff, Logan Dean 23 May 2024 (has links)
We present the results of applying a Fast Boosted Decision Tree (FBDT) algorithm to the task of distinguishing muons from pions in the K-Long and Muon (KLM) detector of the Belle II experiment. Performance was evaluated over a momentum range of 0.6 < p < 5.0 GeV/c by plotting Receiver Operating Characteristic (ROC) curves in 0.1 GeV/c intervals. The FBDT model performed worse than the benchmark likelihood-ratio test model over the whole momentum range when tested on Monte Carlo (MC) simulated data. This is seen in the lower Area Under the Curve (AUC) values for the FBDT ROC curves, which peak around 0.82, while the likelihood-ratio ROC curves achieve peak AUC values around 0.98. Performance of the FBDT model in muon identification may be improved in the future by adding a pre-processing routine for the MC data and input variables. / Master of Science / An important task of a high-energy physics experiment is taking the input information provided by detectors, such as the distance a particle travels through a detector, its momentum, and the energy deposits it makes, and using that information to identify the particle's type. In this study we test a machine learning model that sorts the observed particles into two categories, muons and pions, by comparing the particle's input values to a threshold value at multiple stages, then assigns a final identity to the particle at the last stage. This is compared to a benchmark model that uses the probabilities that these input variables would be seen from a particle of each type to determine which particle type is most likely. The ability of both models to distinguish muons and pions was tested on simulated data from the Belle II detector, and the benchmark model outperformed the machine learning model.
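As a rough illustration of the comparison (not the Belle II FBDT or KLM likelihood code; the features, distributions, and binning below are invented for the sketch):

```python
# Toy comparison: gradient-boosted decision trees vs. a likelihood-ratio
# discriminant, scored by ROC AUC in momentum bins. All features and
# distributions are illustrative assumptions, not Belle II data.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20000
is_muon = rng.integers(0, 2, n).astype(bool)

# Hypothetical detector features: penetration depth and momentum.
depth = np.where(is_muon, rng.normal(12, 3, n), rng.normal(6, 3, n))
p = rng.uniform(0.6, 5.0, n)                      # GeV/c
X = np.column_stack([depth, p])

half = n // 2                                     # train / test split
gbdt = GradientBoostingClassifier(n_estimators=100).fit(X[:half], is_muon[:half])
gbdt_score = gbdt.predict_proba(X[half:])[:, 1]

# Likelihood ratio using the (here, known) per-class depth densities.
lr_score = norm.pdf(depth[half:], 12, 3) / (norm.pdf(depth[half:], 6, 3) + 1e-12)

y, pm = is_muon[half:], p[half:]
for lo in np.arange(0.6, 5.0, 0.5):               # coarse momentum bins
    m = (pm >= lo) & (pm < lo + 0.5)
    print(f"{lo:.1f}-{lo + 0.5:.1f} GeV/c: "
          f"GBDT AUC={roc_auc_score(y[m], gbdt_score[m]):.3f}  "
          f"LR AUC={roc_auc_score(y[m], lr_score[m]):.3f}")
```

Because the likelihood ratio here uses the true generating densities, it is the Neyman-Pearson optimum and upper-bounds the tree model, loosely mirroring the benchmark's advantage reported above.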
326

Using Geographic Information Systems To Examine Unmet Healthcare Needs Among Transgender and Non-Binary Young Adults in Florida

Franklin, Nino 01 January 2024 (has links) (PDF)
This study explored healthcare utilization among the Transgender and Gender Non-Binary (TGNB) population of Florida using Geographic Information Systems (GIS) to visualize and analyze the spatial distribution of unmet healthcare needs. The aim was to provide a clear comparison of unmet healthcare needs across regions, highlight the areas with the highest and lowest levels of unmet needs, and understand the demographic factors influencing these disparities. Survey data from the NIH-funded U=CARE study, which involved TGNB participants aged 18-26 from diverse racial/ethnic and socioeconomic backgrounds, were cleaned, geocoded, and analyzed within ArcGIS. Geocoded survey responses were linked to Florida Department of Transportation (FDOT) district boundaries, and choropleth maps were created to represent the percentage of respondents in each geographic unit reporting unmet healthcare needs, with color gradation indicating the intensity of those needs. Regional variations were found: Northeast and Northwest Florida showed the highest levels of unmet healthcare needs despite having the lowest participant counts, while Central Florida, which had the highest number of participants, also reported a substantial percentage of unmet healthcare needs. A demographic analysis indicated that younger participants, those with lower education levels, and individuals from diverse racial and ethnic backgrounds were more likely to report unmet healthcare needs. Districts with lower socioeconomic status (SES) showed higher levels of unmet needs, underscoring the critical role of socioeconomic factors in healthcare access. The study identifies specific regions and demographic groups with significant unmet healthcare needs, informing targeted healthcare interventions and policies. By integrating spatial and demographic analysis, it provides a comprehensive understanding of healthcare disparities among TGNB young adults in Florida, contributing valuable insights for improving health outcomes across diverse populations and addressing the specific healthcare challenges faced by this community.
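A minimal sketch of the mapping step, using geopandas; the file names and column names ("fdot_districts.shp", "district", "unmet_need") are hypothetical assumptions, not the study's actual data:

```python
# Choropleth sketch in the spirit of the study's maps: percent of
# respondents per FDOT district reporting unmet healthcare needs.
# All file and column names are hypothetical placeholders.
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt

districts = gpd.read_file("fdot_districts.shp")    # district polygons
survey = pd.read_csv("survey_geocoded.csv")        # one row per respondent

# Share of respondents in each district reporting unmet needs (0/1 flag).
rates = (survey.groupby("district")["unmet_need"]
               .mean().mul(100).rename("pct_unmet").reset_index())

m = districts.merge(rates, on="district", how="left")
ax = m.plot(column="pct_unmet", cmap="OrRd", legend=True, edgecolor="black")
ax.set_title("Percent reporting unmet healthcare needs, by FDOT district")
ax.set_axis_off()
plt.savefig("unmet_needs_choropleth.png", dpi=200)
```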
327

Kronecker's Theory of Binary Bilinear Forms with Applications to Representations of Integers as Sums of Three Squares

Constable, Jonathan A. 01 January 2016 (has links)
In 1883 Leopold Kronecker published a paper containing “a few explanatory remarks” to an earlier paper of his from 1866. His work loosely connected the theory of integral binary bilinear forms to the theory of integral binary quadratic forms. In this dissertation we discover the statements within Kronecker's paper and offer detailed arithmetic proofs. We begin by developing the theory of binary bilinear forms and their automorphs, providing a classification of integral binary bilinear forms up to equivalence, proper equivalence and complete equivalence. In the second chapter we introduce the class number, proper class number and complete class number as well as two refinements, which facilitate the development of a connection with binary quadratic forms. Our third chapter is devoted to deriving several class number formulas in terms of divisors of the determinant. This chapter also contains lower bounds on the class number for bilinear forms and classifies when these bounds are attained. Lastly, we use the class number formulas to rigorously develop Kronecker's connection between binary bilinear forms and binary quadratic forms. We supply purely arithmetic proofs of five results stated but not proven in the original paper. We conclude by giving an application of this material to the number of representations of an integer as a sum of three squares and show the resulting formula is equivalent to the well-known result due to Gauss.
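For orientation, the objects Kronecker relates have the following generic shapes; the notation is illustrative rather than Kronecker's own:

```latex
% A binary bilinear form B in two pairs of variables, and the binary
% quadratic form Q obtained from it on the diagonal (u,v) = (x,y).
% Generic notation, chosen for illustration.
B(x, y; u, v) = a\,xu + b\,xv + c\,yu + d\,yv,
\qquad
Q(x, y) = B(x, y; x, y) = a x^2 + (b + c)\,xy + d y^2
```

Identifying each bilinear form with the quadratic form on its diagonal is the loose connection mentioned above; roughly, the several equivalence notions in the dissertation differ in which unimodular substitutions in the two variable pairs are allowed.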
328

INFERENCE OF RESIDUAL ATTACK SURFACE UNDER MITIGATIONS

Kyriakos K Ispoglou (6632954) 14 May 2019 (has links)
Despite the broad diversity of attacks and the many different ways an adversary can exploit a system, each attack can be divided into distinct phases: the discovery of a vulnerability in the system, its exploitation, and achieving persistence on the compromised system for (potential) further compromise and future access. Determining the exploitability of a system, and hence the success of an attack, remains a challenging, manual task, not only because the problem cannot be formally defined but also because advanced protections and mitigations further complicate the analysis and hence raise the bar for any successful attack. Nevertheless, under certain circumstances it is still possible for an attacker to circumvent all of the existing defenses.

In this dissertation, we define and infer the Residual Attack Surface of a system. That is, we expose the limitations of state-of-the-art mitigations by showing practical ways to circumvent them. This work is divided into four parts. It assumes an attack with three phases and proposes new techniques to infer the Residual Attack Surface at each stage.

The first part focuses on vulnerability discovery. We propose FuzzGen, a tool for automatically generating fuzzer stubs for libraries. The synthesized fuzzers are target specific, thus resulting in high code coverage. This enables developers to expose and fix vulnerabilities (that reside deep in the code and require initializing a complex state to trigger them) before they can be exploited. We then move to vulnerability exploitation and present a novel technique called Block Oriented Programming (BOP), which automates data-only attacks. Data-only attacks defeat advanced control-flow hijacking defenses such as Control Flow Integrity. Our framework, called BOPC, maps arbitrary exploit payloads onto execution traces and encodes them as a set of memory writes; an attacker's intended execution therefore "sticks" to the execution flow of the underlying binary and never departs from it. The third part of the dissertation presents an extension of BOPC with measurements that give strong indications of what types of exploit payloads are not possible to execute. BOPC thus enables developers to test what data an attacker could compromise and enables evaluation of the Residual Attack Surface to assess an application's risk. Finally, for the last part, which concerns achieving persistence on the compromised system, we present a new technique, malWASH, that constructs arbitrary malware that evades current dynamic and behavioral analysis. The desired malware is split into hundreds (or thousands) of little pieces, and each piece is injected into a different process; a special emulator coordinates and synchronizes the execution of all individual pieces, thus achieving a "distributed execution" across multiple address spaces. malWASH highlights weaknesses of current dynamic and behavioral analysis schemes and argues for full-system provenance.

Our vision is to expose the weaknesses of deployed mitigations, protections, and defenses through the Residual Attack Surface. That way, we can help the research community reinforce the existing defenses or come up with new, more effective ones.
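To give the flavor of a generated library-fuzzing harness (FuzzGen itself emits libFuzzer stubs for C/C++ libraries; the Python/atheris sketch below is only an analogue, and mylib and its API are hypothetical):

```python
# Sketch of a library fuzzing harness in the spirit of a FuzzGen-generated
# stub, written against Python's atheris. FuzzGen itself produces libFuzzer
# stubs for C/C++; 'mylib' and its API are hypothetical placeholders.
import sys
import atheris

with atheris.instrument_imports():
    import mylib  # hypothetical library under test

def TestOneInput(data):
    fdp = atheris.FuzzedDataProvider(data)
    # Drive a short API sequence, building up the complex internal state
    # that deeply-buried bugs often require before they trigger.
    try:
        parser = mylib.Parser(max_depth=fdp.ConsumeIntInRange(1, 64))
        parser.feed(fdp.ConsumeBytes(256))
        parser.finalize()
    except mylib.ParseError:
        pass  # well-defined failure; crashes and hangs are what we hunt

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```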
329

Análise dos caminhos de execução de programas para a paralelização automática de códigos binários para a plataforma Intel x86 / Analysis of the execution paths of programs to perform automatic parallelization of binary codes on the platform Intel x86

Eberle, André Mantini 06 October 2015 (has links)
Aplicações têm tradicionalmente utilizado o paradigma de programação sequencial. Com a recente expansão da computação paralela, em particular os processadores multinúcleo e ambientes distribuídos, esse paradigma tornou-se um obstáculo para a utilização dos recursos disponíveis nesses sistemas, uma vez que a maior parte das aplicações tornam-se restrita à execução sobre um único núcleo de processamento. Nesse sentido, este trabalho de mestrado introduz uma abordagem para paralelizar programas sequenciais de forma automática e transparente, diretamente sobre o código-binário, de forma a melhor utilizar os recursos disponíveis em computadores multinúcleo. A abordagem consiste na desmontagem (disassembly) de aplicações Intel x86 e sua posterior tradução para uma linguagem intermediária. Em seguida, são produzidos grafos de fluxo e dependências, os quais são utilizados como base para o particionamento das aplicações em unidades paralelas. Por fim, a aplicação é remontada (assembly) e traduzida novamente para a arquitetura original. Essa abordagem permite a paralelização de aplicações sem a necessidade de esforço suplementar por parte de desenvolvedores e usuários. / Traditionally, computer programs have been developed using the sequential programming paradigm. With the advent of parallel computing systems, such as multi-core processors and distributed environments, the sequential paradigm became a barrier to the utilization of the available resources, since the program is restricted to a single processing unit. To address this issue, we introduce a transparent automatic parallelization methodology using a binary rewriter. The steps involved in our approach are: disassembly of an Intel x86 application, transforming it into an intermediate language; analysis of this intermediate code to obtain flow and dependency graphs; partitioning of the application into parallel units using the obtained graphs; and subsequent reassembly of the application, writing it back to the original Intel x86 architecture. By transforming the compiled application, we aim at obtaining a program which can exploit the parallel resources with no extra effort required from either users or developers.
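A toy sketch of the dependency-graph and partitioning steps; the three-address trace format below is an illustrative assumption, far simpler than real x86 lifted to an intermediate language:

```python
# Build read-after-write dependencies over a straight-line pseudo-assembly
# trace, then group instructions into wavefronts that could run in parallel.
# The instruction format is an illustrative assumption.
from collections import defaultdict

trace = [                       # (dest, op, sources)
    ("r1", "load", ["a"]),
    ("r2", "load", ["b"]),
    ("r3", "add",  ["r1", "r1"]),
    ("r4", "mul",  ["r2", "r2"]),
    ("r5", "add",  ["r3", "r4"]),
]

deps = defaultdict(set)         # instruction index -> indices it depends on
last_def = {}
for i, (dest, _, srcs) in enumerate(trace):
    for s in srcs:
        if s in last_def:
            deps[i].add(last_def[s])
    last_def[dest] = i

# Wavefront schedule: all instructions whose dependencies are already
# satisfied form one parallel unit.
done, units = set(), []
while len(done) < len(trace):
    ready = [i for i in range(len(trace))
             if i not in done and deps[i] <= done]
    units.append(ready)
    done.update(ready)

print(units)                    # [[0, 1], [2, 3], [4]]
```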
