In this project, we aim to present a proof-of-concept demonstration of the in-storage image processing capabilities of field-programmable gate arrays (FPGAs), which could serve as a baseline component of the prompt astronomical image processing pipeline of the future Rubin Observatory. A successful extension of this work to a larger scale would furthermore be expected to see practical use in supernova detection efforts and multi-messenger astronomy. We use Dark Energy Survey (DES) data in our preliminary work and replicate certain components of the DES pre-processing pipeline in ccdproc/astropy, C++, and Vitis HLS for verification purposes. Background information on astronomical and CCD image processing will also be provided.
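As a minimal sketch of the kind of CCD pre-processing steps being replicated, the following uses the ccdproc/astropy calls for overscan subtraction, trimming, bias subtraction, and flat-field correction. The file names and FITS section strings are illustrative placeholders, not the actual DES pipeline configuration.

```python
# Standard CCD pre-processing sketch with ccdproc/astropy.
# File names and overscan/trim regions below are made-up examples.
import ccdproc
from astropy.nddata import CCDData

raw = CCDData.read("raw_exposure.fits", unit="adu")
master_bias = CCDData.read("master_bias.fits", unit="adu")
master_flat = CCDData.read("master_flat.fits", unit="adu")

# Subtract the overscan level and trim to the science region
# (FITS-style section strings; regions are hypothetical).
reduced = ccdproc.subtract_overscan(raw, fits_section="[2049:2080, :]")
reduced = ccdproc.trim_image(reduced, fits_section="[1:2048, :]")

# Bias subtraction and flat-field correction.
reduced = ccdproc.subtract_bias(reduced, master_bias)
reduced = ccdproc.flat_correct(reduced, master_flat)

reduced.write("reduced_exposure.fits", overwrite=True)
```

These software-level reference results can then be compared bit-for-bit against the C++ and Vitis HLS implementations.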
In today's data-driven world, tape storage systems continue to play a crucial role in long-term data retention across industries such as scientific research and archival repositories, particularly at national laboratories like Fermilab. Given the massive data volumes involved, optimizing the performance of these tape storage systems is of paramount importance for efficient data access and retrieval. In this study, we present a comprehensive approach to simulating tape storage systems and leveraging this simulation framework to compute performance analytics. The proposed framework is designed to accurately model the behavior of modern tape storage systems, including components such as tape drives, libraries, and media. By capturing the intricacies of the system's architecture and operational characteristics, the simulation enables us to emulate real-world scenarios and assess performance under different workloads and configurations. Through extensive performance analysis, we identify key factors influencing system performance, such as data compression and tape utilization patterns.
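To illustrate the general shape of such a simulation, here is a minimal discrete-event sketch of read requests queuing for a small pool of tape drives, written with the simpy library. All timing parameters (mount, seek, throughput) and the drive count are assumed values for illustration, not characteristics of the framework described in the abstract.

```python
# Discrete-event sketch of a tape library with simpy.
# All parameters below are illustrative assumptions.
import random
import simpy

MOUNT_TIME_S = 90          # robot fetch + drive load (assumed)
AVG_SEEK_TIME_S = 40       # locate file on tape (assumed)
DRIVE_RATE_MBPS = 300      # sustained read rate (assumed)

def read_request(env, drives, size_mb, stats):
    """One file read: wait for a free drive, then mount, seek, read."""
    arrival = env.now
    with drives.request() as req:
        yield req                      # queue for a tape drive
        yield env.timeout(MOUNT_TIME_S)
        yield env.timeout(random.expovariate(1.0 / AVG_SEEK_TIME_S))
        yield env.timeout(size_mb / DRIVE_RATE_MBPS)
    stats.append(env.now - arrival)    # total latency of this request

def workload(env, drives, stats):
    """Poisson arrivals of reads with random file sizes."""
    for _ in range(200):
        yield env.timeout(random.expovariate(1.0 / 60))  # ~1 req/min
        size_mb = random.uniform(500, 5000)
        env.process(read_request(env, drives, size_mb, stats))

env = simpy.Environment()
drives = simpy.Resource(env, capacity=4)   # 4 drives (assumed)
stats = []
env.process(workload(env, drives, stats))
env.run()
print(f"mean latency: {sum(stats)/len(stats):.1f} s over {len(stats)} reads")
```

Sweeping parameters such as the drive count or request rate in a model like this is how the framework derives performance analytics under different workloads and configurations.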
Formulating new scientific hypotheses requires time- and labor-intensive human effort. We are developing a method for the automated generation of new scientific hypotheses that uses graphs of concept keywords from published papers as its main data structure. The goal is to create an ensemble network that combines a graph neural network (GNN), which predicts new links between concept keywords, with an LLM fine-tuned on physics literature, which provides detailed hypotheses. The edges predicted by the GNN are used to retrieve linked concepts (e.g., the combination of two keywords) for the prompts fed to the LLM. In this project, I focused on a comparative analysis of state-of-the-art edge prediction algorithms for linking scientific concepts.
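As a simple illustration of edge prediction on a concept-keyword graph, the sketch below scores candidate links with classical heuristics from networkx; the study itself additionally benchmarks GNN-based predictors against such baselines. The toy graph and candidate pairs are invented for the example.

```python
# Scoring candidate concept links with classical link-prediction
# heuristics via networkx. The toy graph below is made up.
import networkx as nx

# Concept-keyword graph: nodes are keywords, edges mean co-occurrence.
G = nx.Graph()
G.add_edges_from([
    ("neutrino", "oscillation"), ("oscillation", "mass hierarchy"),
    ("neutrino", "cross section"), ("cross section", "detector"),
    ("detector", "mass hierarchy"),
])

# Candidate non-edges to score as potential new concept links.
candidates = [("neutrino", "mass hierarchy"), ("oscillation", "detector")]

for name, scorer in [
    ("Jaccard", nx.jaccard_coefficient),
    ("Adamic-Adar", nx.adamic_adar_index),
    ("Pref. attachment", nx.preferential_attachment),
]:
    for u, v, score in scorer(G, candidates):
        print(f"{name:17s} {u} -- {v}: {score:.3f}")
```

In the full pipeline, the highest-scoring predicted edges would supply the concept pairs inserted into the prompts for the fine-tuned LLM.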