71 |
Constitucionalidad y Legalidad en la aplicación de la Jornada de Trabajo Atípica Acumulativa en la Minería y Casuística aplicada / Constitutionality and Legality in the Application of the Atypical Cumulative Working Day in Mining, with Applied Case Studies Vela Gonzales, Carlos Gil, Cadenillas Rabanal, Cristian Attilio 13 October 2020 (has links)
La importancia de la Jornada Atípica Acumulativa en la minería en nuestro país radica en que facilita mayores niveles de producción y utiliza para ello la fuerza laboral de los trabajadores mineros en ciclos de trabajo (o también llamados sistemas de trabajo) que consisten en períodos acumulativos de trabajo y que son compensados también con períodos acumulativos de descanso, pero para ello debe cumplirse con diversos parámetros constitucionales (Constitución Política de 1993, Sentencias del Tribunal Constitucional y Convenio 01 de la Organización Internacional del Trabajo - OIT) así como con diversos parámetros legales (TUO de la Ley de Jornada y su Reglamento, Consolidación de los Descansos Remunerados y su Reglamento) y otros parámetros interpretativos sumamente importantes (Directivas e Informes del Ministerio de Trabajo – MTPE). De contar con dichos parámetros la jornada atípica acumulativa en la minería será constitucionalmente y legalmente válida.
Más, luego de precisar la constitucionalidad y legalidad de este tipo de jornada de trabajo, corresponde ahora definir algunos aspectos importantes de casuística respecto al cálculo de la Jornada Atípica Acumulativa en la Minería que servirán para despejar algunas dudas en este tema poco estudiado aún en el Derecho Laboral y en el Derecho Minero. Y finalmente corresponde desarrollar una propuesta normativa que regule todos los aspectos necesarios para la aplicación de este tipo especial de jornadas de trabajo. / The importance of the Atypical Cumulative Working Day in mining in our country lies in the fact that it enables higher levels of production by organising the miners' labour in work cycles (also called work systems) consisting of cumulative periods of work compensated by cumulative periods of rest. For this, various constitutional parameters must be met (the 1993 Constitution, Constitutional Court rulings and Convention No. 1 of the International Labour Organization - ILO), as well as various legal parameters (the Consolidated Text (TUO) of the Working Hours Act and its Regulations, the Law on the Consolidation of Paid Rest Periods and its Regulations) and other highly important interpretative parameters (Directives and Reports of the Ministry of Labour - MTPE). If these parameters are met, the atypical cumulative working day in mining will be constitutionally and legally valid.
Moreover, after establishing the constitutionality and legality of this type of working day, it is necessary to define some important practical aspects regarding the calculation of the Atypical Cumulative Working Day in Mining, which will help to clear up some doubts in a subject still little studied in Labour Law and Mining Law. Finally, a normative proposal is developed that regulates all the aspects necessary for the application of this special type of working day. / Tesis
|
72 |
Les algorithmes de haute résolution en tomographie d'émission par positrons : développement et accélération sur les cartes graphiques / High-resolution algorithms in positron emission tomography: development and acceleration on graphics cards Nassiri, Moulay Ali 05 1900 (has links)
La tomographie d’émission par positrons (TEP) est une modalité d’imagerie moléculaire utilisant des radiotraceurs marqués par des isotopes émetteurs de positrons permettant de quantifier et de sonder des processus biologiques et physiologiques. Cette modalité est surtout utilisée actuellement en oncologie, mais elle est aussi utilisée de plus en plus en cardiologie, en neurologie et en pharmacologie. En fait, c’est une modalité qui est intrinsèquement capable d’offrir avec une meilleure sensibilité des informations fonctionnelles sur le métabolisme cellulaire. Les limites de cette modalité sont surtout la faible résolution spatiale et le manque d’exactitude de la quantification. Par ailleurs, afin de dépasser ces limites qui constituent un obstacle pour élargir le champ des applications cliniques de la TEP, les nouveaux systèmes d’acquisition sont équipés d’un grand nombre de petits détecteurs ayant de meilleures performances de détection. La reconstruction de l’image se fait en utilisant les algorithmes stochastiques itératifs mieux adaptés aux acquisitions à faibles statistiques. De ce fait, le temps de reconstruction est devenu trop long pour une utilisation en milieu clinique. Ainsi, pour réduire ce temps, les données d’acquisition sont compressées et des versions accélérées d’algorithmes stochastiques itératifs, qui sont généralement moins exactes, sont utilisées. Les performances améliorées par l’augmentation du nombre de détecteurs sont donc limitées par les contraintes de temps de calcul.
Afin de sortir de cette boucle et permettre l’utilisation des algorithmes de reconstruction robustes, de nombreux travaux ont été effectués pour accélérer ces algorithmes sur les dispositifs GPU (Graphics Processing Units) de calcul haute performance. Dans ce travail, nous avons rejoint cet effort de la communauté scientifique pour développer et introduire en clinique l’utilisation des algorithmes de reconstruction puissants qui améliorent la résolution spatiale et l’exactitude de la quantification en TEP.
Nous avons d’abord travaillé sur le développement des stratégies pour accélérer sur les dispositifs GPU la reconstruction des images TEP à partir des données d’acquisition en mode liste. En fait, le mode liste offre de nombreux avantages par rapport à la reconstruction à partir des sinogrammes, entre autres : il permet d’implanter facilement et avec précision la correction du mouvement et le temps de vol (TOF : Time-Of-Flight) pour améliorer l’exactitude de la quantification. Il permet aussi d’utiliser les fonctions de base spatio-temporelles pour effectuer la reconstruction 4D afin d’estimer les paramètres cinétiques des métabolismes avec exactitude. Cependant, d’une part, l’utilisation de ce mode est très limitée en clinique, et d’autre part, il est surtout utilisé pour estimer la valeur normalisée de captation SUV qui est une grandeur semi-quantitative limitant le caractère fonctionnel de la TEP. Nos contributions sont les suivantes :
- Le développement d’une nouvelle stratégie visant à accélérer sur les dispositifs GPU l’algorithme 3D LM-OSEM (List Mode Ordered-Subset Expectation-Maximization), y compris le calcul de la matrice de sensibilité intégrant les facteurs d’atténuation du patient et les coefficients de normalisation des détecteurs. Le temps de calcul obtenu est non seulement compatible avec une utilisation clinique des algorithmes 3D LM-OSEM, mais il permet également d’envisager des reconstructions rapides pour les applications TEP avancées telles que les études dynamiques en temps réel et des reconstructions d’images paramétriques à partir des données d’acquisitions directement.
- Le développement et l’implantation sur GPU de l’approche Multigrilles/Multitrames pour accélérer l’algorithme LMEM (List-Mode Expectation-Maximization). L’objectif est de développer une nouvelle stratégie pour accélérer l’algorithme de référence LMEM qui est un algorithme convergent et puissant, mais qui a l’inconvénient de converger très lentement. Les résultats obtenus permettent d’entrevoir des reconstructions en temps quasi-réel que ce soit pour les examens utilisant un grand nombre de données d’acquisition aussi bien que pour les acquisitions dynamiques synchronisées.
Par ailleurs, en clinique, la quantification est souvent faite à partir de données d’acquisition en sinogrammes généralement compressés. Mais des travaux antérieurs ont montré que cette approche pour accélérer la reconstruction diminue l’exactitude de la quantification et dégrade la résolution spatiale. Pour cette raison, nous avons parallélisé et implémenté sur GPU l’algorithme AW-LOR-OSEM (Attenuation-Weighted Line-of-Response-OSEM) ; une version de l’algorithme 3D OSEM qui effectue la reconstruction à partir de sinogrammes sans compression de données en intégrant les corrections de l’atténuation et de la normalisation dans les matrices de sensibilité. Nous avons comparé deux approches d’implantation : dans la première, la matrice système (MS) est calculée en temps réel au cours de la reconstruction, tandis que la seconde implantation utilise une MS pré-calculée avec une meilleure exactitude. Les résultats montrent que la première implantation offre une efficacité de calcul environ deux fois meilleure que celle obtenue dans la deuxième implantation. Les temps de reconstruction rapportés sont compatibles avec une utilisation clinique de ces deux stratégies. / Positron emission tomography (PET) is a molecular imaging modality that uses radiotracers labeled with positron-emitting isotopes in order to quantify many biological processes. The clinical applications of this modality are largely in oncology, but it has the potential to become a reference exam for many diseases in cardiology, neurology and pharmacology. In fact, it is intrinsically able to offer functional information on cellular metabolism with good sensitivity. The principal limitations of this modality are the limited spatial resolution and the limited accuracy of the quantification. To overcome these limits, the recent PET systems use a huge number of small detectors with better performance. The image reconstruction is also done using accurate algorithms such as iterative stochastic algorithms. But as a consequence, the reconstruction time becomes too long for clinical use. So the acquired data are compressed, and accelerated versions of iterative stochastic algorithms, which are generally non-convergent, are used to perform the reconstruction. Consequently, the obtained performance is compromised.
In order to be able to use complex reconstruction algorithms in clinical applications on the new PET systems, many previous studies have aimed to accelerate these algorithms on GPU devices. In this thesis, we joined that effort to develop and introduce into routine clinical use accurate reconstruction algorithms that improve the spatial resolution and the accuracy of quantification in PET.
We first worked to develop new strategies for accelerating, on GPU devices, reconstruction from list-mode acquisitions. This mode offers many advantages over the histogram mode, such as easier motion correction, the possibility of using time-of-flight (TOF) information to improve quantification accuracy, and the possibility of using temporal basis functions to perform 4D reconstruction and extract kinetic parameters with better accuracy directly from the acquired data. However, one of the main obstacles that limits the use of the list-mode reconstruction approach in routine clinical practice is the relatively long reconstruction time. To overcome this obstacle we:
developed a new strategy to accelerate on GPU devices the fully 3D list-mode ordered-subset expectation-maximization (LM-OSEM) algorithm, including the calculation of the sensitivity matrix that accounts for the patient-specific attenuation and normalisation corrections. The reported reconstruction times are not only compatible with a clinical use of 3D LM-OSEM algorithms, but also let us envision fast reconstructions for advanced PET applications such as real-time dynamic studies and parametric image reconstructions.
developed and implemented on GPU a multigrid/multiframe approach to the expectation-maximization algorithm for list-mode acquisitions (MGMF-LMEM). The objective is to accelerate the gold-standard LMEM (list-mode expectation-maximization) algorithm, which is convergent and powerful but converges slowly. The GPU-based MGMF-LMEM algorithm processed data at a rate close to one million events per second per iteration, and makes it possible to perform near real-time reconstructions for large acquisitions as well as for low-count acquisitions such as gated studies.
Moreover, in clinical use, quantification is often done from acquired data organized in sinograms, which are generally compressed in order to accelerate reconstruction. But previous works have shown that this approach to accelerating the reconstruction decreases the accuracy of quantification and degrades the spatial resolution. Ordered-subset expectation-maximization (OSEM) is the most widely used algorithm for reconstruction from sinograms in the clinic. Thus, we parallelized and implemented on GPU the attenuation-weighted line-of-response OSEM (AW-LOR-OSEM) algorithm, which performs PET image reconstruction from sinograms without any data compression and incorporates the attenuation and normalization corrections in the sensitivity matrices as weight factors. We compared two implementation strategies: in the first, the system matrix (SM) is calculated on the fly during the reconstruction, while the second uses a more accurate precalculated SM. The results show that the computational efficiency is about twice as high for the implementation that calculates the SM on the fly as for the one using the precalculated SM, but the reported reconstruction times are compatible with clinical use for both strategies.
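The reconstruction algorithms named in this abstract (LM-OSEM, MGMF-LMEM, AW-LOR-OSEM) are all variants of the same multiplicative EM update. The following NumPy sketch shows that update for the list-mode case; it is a toy, single-threaded illustration (the thesis targets GPU implementations), and the per-event system-matrix rows and the sensitivity image are assumed to be supplied by a projector that is not shown here.

```python
import numpy as np

def lm_em_update(image, event_rows, sensitivity):
    """One list-mode EM iteration (toy CPU sketch, not the thesis's GPU code).

    image       : 1D array, current activity estimate (voxels, flattened)
    event_rows  : list of (voxel_indices, weights) pairs, one per detected
                  coincidence event, i.e. the non-zero system-matrix row
                  for that event's line of response
    sensitivity : 1D array, sum of system-matrix rows over all possible LORs,
                  including attenuation and normalisation factors
    """
    correction = np.zeros_like(image)
    for idx, w in event_rows:
        forward = np.dot(w, image[idx])        # expected counts along this LOR
        if forward > 0:
            correction[idx] += w / forward     # back-project the measured/expected ratio
    return image * correction / np.maximum(sensitivity, 1e-12)
```

An OSEM-style variant would apply this update to ordered subsets of `event_rows` instead of the full event list, trading some convergence guarantees for speed.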
|
73 |
OpenBSD Hardware Sensors — Environmental Monitoring and Fan Control Murenin, Constantine Aleksandrovich 18 May 2010 (has links)
This thesis discusses the motivation, origin, history, design guidelines, API, the device drivers and userland utilities of the hardware sensors framework available in OpenBSD. The framework spans multiple utilities in the base system and the ports tree, is utilised by over 75 drivers, and is considered to be a distinctive and ready-to-use feature that sets OpenBSD apart from many other operating systems, and at its root is inseparable from the OpenBSD experience.
The present framework, however, is missing the functionality that would allow the user to interface with the fan-controlling part of the hardware monitors. We therefore discuss the topic of fan control and introduce sysctl-based interfacing with the fan-controlling capabilities of microprocessor system hardware monitors. The discussed prototype implementation reduces the noise and power consumption of fans in personal computers, especially of those PCs that are designed from off-the-shelf components. We further argue that our prototype is easier to use, more intuitive and more robust than solutions available elsewhere.
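As a loose illustration of the kind of sysctl-driven monitoring the prototype builds on, the sketch below reads one temperature value through sysctl(8) and maps it to a fan duty cycle. The hw.sensors tree is the stock OpenBSD sensors framework, but the specific sensor name and the temperature-to-duty mapping here are assumptions made for this example; the actual fan-control sysctl interface introduced by the thesis is not reproduced.

```python
import re
import subprocess

TEMP_NODE = "hw.sensors.lm0.temp0"  # hypothetical sensor; list available ones with `sysctl hw.sensors`

def read_temp_celsius(node: str = TEMP_NODE) -> float:
    """Read one temperature sensor via sysctl(8), e.g. output '42.00 degC'."""
    out = subprocess.run(["sysctl", "-n", node], capture_output=True, text=True, check=True)
    match = re.search(r"[-+]?\d+(\.\d+)?", out.stdout)
    if not match:
        raise ValueError(f"unexpected sysctl output: {out.stdout!r}")
    return float(match.group(0))

def fan_duty_percent(temp_c: float, low: float = 40.0, high: float = 70.0) -> int:
    """Map temperature linearly to a 30-100 % duty cycle between `low` and `high`."""
    if temp_c <= low:
        return 30
    if temp_c >= high:
        return 100
    return int(30 + (temp_c - low) / (high - low) * 70)

if __name__ == "__main__":
    t = read_temp_celsius()
    print(f"{t:.1f} degC -> fan duty {fan_duty_percent(t)} %")
    # Writing the duty cycle back would go through the fan-control sysctl node
    # proposed by the prototype; that node name is not reproduced here.
```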
|
74 |
Improved estimation in threshold regression with applications to price transmission modeling / Verbessertes Schätzen von Threshold Regressionsmodellen mit Anwendungen in der Preistransmissionsanalyse Greb, Friederike 30 January 2012 (has links)
No description available.
|
75 |
Ein systemorientierter Ansatz zur Modularisierung von Planspielen mit dem Ziel der Komplexitätssteuerung und Integration in Standardsoftware / A system-oriented approach to modularize business games with the aim of controlling the complexity and integration into standard software Fischer, Helge 05 July 2006 (has links)
No description available.
|
77 |
Groundwater-stream water interactions: point and distributed measurements and innovative upscaling technologies Gaona Garcia, Jaime 27 June 2019 (has links)
The need to consider groundwater and surface water as a single resource has fostered the interest of the scientific community in the interactions between surface water and groundwater. The region below and alongside rivers where surface hydrology and subsurface hydrology converge is the hyporheic zone. This is the region where water exchange determines many biogeochemical and ecological processes with great impact on the functioning of rivers. However, the complex processes taking place in the hyporheic zone require a multidisciplinary approach.
The combination of innovative point and distributed techniques originally developed in separate disciplines is of great advantage for the indirect identification of water exchange in the hyporheic zone. Distributed techniques using temperature as a tracer, such as fiber-optic distributed temperature sensing, can identify the different components of groundwater-surface water interactions based on their spatial and temporal thermal patterns at the sediment-water interface. In particular, groundwater, interflow discharge and local hyporheic exchange flows can be differentiated based on the distinct size, duration and sign of the temperature anomalies. The scale range and resolution of fiber-optic distributed temperature sensing are well complemented by geophysics, which resolves subsurface structures at a similar resolution and scale. Thus, the use of fiber-optic distributed temperature sensing to trace flux patterns, supported by the exploration of subsurface structures with geophysics, enables spatial and temporal investigation of groundwater-surface water interactions with an unprecedented level of accuracy and resolution.
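A minimal sketch of how such thermal patterns can be screened automatically: given a time-by-distance temperature matrix from a DTS cable at the streambed, locations with strongly damped diel temperature swings are flagged as candidate groundwater discharge zones. The threshold and the data layout are assumptions of this illustration, not the workflow used in the thesis.

```python
import numpy as np

def damped_amplitude_zones(temps, threshold_fraction=0.5):
    """Flag DTS locations with damped diel temperature swings.

    temps              : 2D array, shape (n_times, n_locations), streambed
                         temperatures along the fiber over at least one day
    threshold_fraction : locations whose diel amplitude falls below this
                         fraction of the median amplitude are flagged
    Returns a boolean array of length n_locations (True = damped signal,
    a possible groundwater discharge location).
    """
    amplitude = temps.max(axis=0) - temps.min(axis=0)   # diel range per location
    return amplitude < threshold_fraction * np.median(amplitude)
```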
In contrast to the aforementioned methods, which are suited to pattern identification at the interface, point techniques are required to quantify hyporheic exchange fluxes. In the present PhD thesis, point methods based on hydraulic gradients and thermal profiles are used to quantify hyporheic exchange flows. However, both are one-dimensional methods that assume purely vertical flow, while the reality is much more complex. The study evaluates the accuracy of the available methods and the factors that affect their reliability. The applied methods not only quantify hyporheic exchange flows but also provide the basis for an interpretation of the sediment layering in the hyporheic zone.
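Under the one-dimensional assumption discussed above, the hydraulic-gradient method reduces to Darcy's law. A minimal sketch with illustrative numbers (not data from the thesis):

```python
def darcy_vertical_flux(head_difference_m, spacing_m, hydraulic_conductivity_m_s):
    """1D Darcy flux q = K * dh/dz between a piezometer screen and the stream.

    With head_difference_m = h_subsurface - h_stream, a positive q means
    upward flow, i.e. gaining conditions.
    """
    vertical_hydraulic_gradient = head_difference_m / spacing_m
    return hydraulic_conductivity_m_s * vertical_hydraulic_gradient

# Example: 2 cm head excess measured over a 30 cm screen depth, K = 1e-4 m/s
q = darcy_vertical_flux(0.02, 0.30, 1e-4)   # about 6.7e-6 m/s, upward
```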
For upscaling of the previous results, three-dimensional modelling of flow and heat transport in the hyporheic zone combines pattern identification and quantification of fluxes into a single framework. Modelling can evaluate the influence of the factors governing groundwater-surface water interactions, as well as assess how several aspects of model design and calibration affect the reliability of the simulations. More importantly, this modelling approach enables accurate estimation of water exchange at any location of the domain with unparalleled resolution. Despite the challenges of 3D modelling of the hyporheic zone and of integrating point and distributed data in models, the benefits should encourage the hyporheic community to adopt an integrative approach spanning from the measurement to the upscaling of hyporheic processes.
|
78 |
Forage quality, animal performance, and carcass traits of steers finished on winter annual ryegrass (Lolium multiflorum Lam.) pasture with varying levels of corn supplementation Roberts, Sean David, Kerth, Christopher R. January 2005 (has links) (PDF)
Thesis (M.S.)--Auburn University, 2005. / Abstract. Vita. Includes bibliographic references.
|
79 |
Large-Context Question Answering with Cross-Lingual Transfer Sagen, Markus January 2021 (has links)
Models based on the transformer architecture have become among the most prominent for solving a multitude of natural language processing (NLP) tasks since the architecture's introduction in 2017. However, much research related to the transformer model has focused primarily on achieving high performance, and many problems remain unsolved. Two of the most prominent currently are the lack of high-performing non-English pre-trained models, and the limited number of words most trained models can incorporate in their context. Solving these problems would make NLP models more suitable for real-world applications, improving information retrieval, reading comprehension, and more. All previous research has focused on incorporating long context for English-language models. This thesis investigates the cross-lingual transferability between languages when training for long context only in English. Training long-context models only in English could make long context in low-resource languages, such as Swedish, more accessible, since such data are hard to find in most languages and it is costly to train a model for each language. This could become an efficient method for creating long-context models in other languages without the need for such data in all languages or for pre-training from scratch. We extend the models’ context using the training scheme of the Longformer architecture and fine-tune on a question-answering task in several languages. Our evaluation could neither satisfactorily confirm nor deny whether transferring long-term context is possible for low-resource languages. We believe that using datasets that require long-context reasoning, such as a multilingual TriviaQA dataset, could demonstrate our hypothesis’s validity.
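A minimal sketch of the context-extension step in the Longformer training scheme referred to above: the pretrained position-embedding table is tiled to initialise a longer one before continued training. This is a generic PyTorch illustration of the idea, not the code used in the thesis.

```python
import torch

def extend_position_embeddings(pretrained: torch.Tensor, new_max_len: int) -> torch.Tensor:
    """Initialise a longer position-embedding table by tiling the pretrained one.

    pretrained  : tensor of shape (old_max_len, hidden_dim), e.g. 512 x 768
    new_max_len : target context length, e.g. 4096
    """
    old_max_len, hidden_dim = pretrained.shape
    extended = pretrained.new_empty(new_max_len, hidden_dim)
    for start in range(0, new_max_len, old_max_len):
        end = min(start + old_max_len, new_max_len)
        extended[start:end] = pretrained[: end - start]   # copy the table block by block
    return extended

# Illustrative use with a RoBERTa-style encoder (attribute path is an assumption):
# long_table = extend_position_embeddings(model.embeddings.position_embeddings.weight.data, 4096)
```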
|