<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>DSpace Community: KAIST School of Computing</title>
    <link>http://hdl.handle.net/10203/8</link>
    <description>KAIST School of Computing</description>
    <pubDate>Mon, 16 Mar 2026 05:05:12 GMT</pubDate>
    <dc:date>2026-03-16T05:05:12Z</dc:date>
    <image>
      <title>DSpace Community: KAIST School of Computing</title>
      <url>https://koasas.kaist.ac.kr:443/retrieve/65498/전산학부.JPG</url>
      <link>http://hdl.handle.net/10203/8</link>
    </image>
    <item>
      <title>Refined myocardium segmentation from CT using a Hybrid-Fusion transformer</title>
      <link>http://hdl.handle.net/10203/339186</link>
      <description>Title: Refined myocardium segmentation from CT using a Hybrid-Fusion transformer
Authors: Qin, Shihua; Xing, Fangxu; Cho, Jihoon; Park, Jinah; Liu, Xiaofeng; Rouhollahi, Amir; Farhat, Elias J. Bou; Javadikasgari, Hoda; Sabe, Ashraf; Nezami, Farhad R.; Woo, Jonghye; Aganj, Iman
Abstract: Accurate segmentation of the left ventricle (LV) in cardiac CT images is crucial for assessing ventricular function and diagnosing cardiovascular diseases. Creating a sufficiently large training set with accurate manual labels of LV can be cumbersome. More efficient semi-automatic segmentation, however, often includes unwanted structures, such as papillary muscles, due to low contrast between the LV wall and surrounding tissues. This study introduces a two-input-channel method within a Hybrid-Fusion Transformer deep-learning framework to produce refined LV labels from a combination of CT images and semi-automatic rough labels, effectively removing papillary muscles. By leveraging the efficiency of semi-automatic LV segmentation, we train an automatic refined segmentation model on a small set of images with both refined manual and rough semi-automatic labels. Evaluated through quantitative cross-validation, our method outperformed models that used only either CT images or rough masks as input.</description>
      <pubDate>Mon, 01 Jun 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">http://hdl.handle.net/10203/339186</guid>
      <dc:date>2026-06-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>ADP-QFed: Privacy-Preserving Quantized Federated Learning for Intelligent Edge Sensing IoT Systems</title>
      <link>http://hdl.handle.net/10203/339605</link>
      <description>Title: ADP-QFed: Privacy-Preserving Quantized Federated Learning for Intelligent Edge Sensing IoT Systems
Authors: Tariq, Omer; Dastagir, Muhammad Bilal Akram; Han, Dong-Soo
Abstract: Federated learning (FL) enables decentralized model training but faces critical challenges in jointly optimizing privacy, accuracy, and communication efficiency, essential for resource-constrained wireless internet of things (IoT) deployments. We introduce an adaptive differentially private quantized FL (ADP-QFed) framework that addresses these challenges through layer-wise adaptive noise injection and dual-bit deterministic quantization. By computing layer-specific sensitivity and importance scores, ADP-QFed dynamically calibrates privacy noise to minimize accuracy loss while ensuring rigorous (ε, δ)-differential privacy (DP) guarantees. The framework employs n-bit quantization for local computation and m-bit quantization for transmission, reducing communication overhead by up to 75%. Experiments on MNIST, FMNIST, and CIFAR-10 achieve test accuracies of 99.41%, 91.06%, and 82.94%, respectively, outperforming existing privacy-preserving FL methods by an average of 3.5%. These results are obtained while maintaining a privacy budget under ε = 2.25, representing a 40% reduction compared to state-of-the-art methods at similar accuracy levels. ADP-QFed advances practical privacy-preserving FL for edge sensing in low-altitude IoT systems by simultaneously optimizing privacy guarantees, model utility, and energy efficiency in wireless environments.</description>
      <pubDate>Sun, 01 Mar 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">http://hdl.handle.net/10203/339605</guid>
      <dc:date>2026-03-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>DANCE++: Differentiable Accelerator/Network Co-Exploration With Hard Constraints and Data-Free Training for Real-World Scenarios</title>
      <link>http://hdl.handle.net/10203/338784</link>
      <description>Title: DANCE++: Differentiable Accelerator/Network Co-Exploration With Hard Constraints and Data-Free Training for Real-World Scenarios
Authors: Choi, Kanghyun; Hong, Deokki; Lee, Hyeyoon; Yu, Joonsang; Park, Noseong; Kim, Youngsok; Lee, Jinho
Abstract: Co-exploration of neural architectures and hardware accelerators has emerged as a promising approach to address computational cost problems, especially in low-profile systems. However, existing co-exploration methods based on reinforcement learning or evolutionary search suffer from substantial search costs. To address this, this work presents DANCE++, a differentiable approach toward the co-exploration of hardware and network architecture design. At the heart of DANCE++ is a differentiable evaluator network that models hardware metrics with a neural network, enabling accelerator design through backpropagation. DANCE++ significantly reduces search time and improves accuracy and hardware cost metrics compared to traditional approaches. To further address real-world scenarios, this work tackles two important practical issues: 1) hard constraints and 2) data dependency. To meet constraints such as frame rates, this work proposes a gradient manipulation algorithm that guides differentiable optimization toward hard-constrained solutions. Also, to handle cases where the training dataset is inaccessible, this work proposes using data-free training methods in both the co-exploration and training phases. To the best of our knowledge, DANCE++ is the first co-exploration method that targets these real-world challenges, supported by extensive experiments demonstrating its effectiveness.</description>
      <pubDate>Sun, 01 Feb 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">http://hdl.handle.net/10203/338784</guid>
      <dc:date>2026-02-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Adaptability of Vision Foundation Models for 3D Medical Image Segmentation</title>
      <link>http://hdl.handle.net/10203/338969</link>
      <description>Title: Adaptability of Vision Foundation Models for 3D Medical Image Segmentation
Authors: Ahn, Suhyun; Lee, Donggyu; Park, Jinah
Abstract: Vision Foundation Models (VFMs), such as DINOv2 and SAM, have demonstrated unprecedented generalizability in natural imaging and show strong promise in medical imaging due to their semantically rich representations. However, their effective application to 3D volumetric segmentation remains largely underexplored, especially concerning optimal adaptation strategies for transferring 2D pre-trained knowledge to the structurally disparate 3D domain. To address this, we present a comprehensive investigation into the transferability and task-specific adaptability of six diverse 2D VFMs (including Self-Supervised, Vision-Language, and Segmentation Generalists) for 3D medical image segmentation. We systematically evaluate four distinct transfer learning paradigms, including advanced Fine-Tuning methods, across four heterogeneous 3D medical datasets. Our results establish VFMs as a powerful and cost-effective generalist baseline, consistently outperforming non-pretrained and standard 3D ViT architectures despite the substantial domain shift. Crucially, our systematic exploration reveals that parameter-efficient fine-tuning achieves the highest segmentation accuracy across all datasets. Feature-level analyses using PCA and CKA provide key insights, confirming that optimal performance stems from successfully balancing the preservation of generalizable low-level visual features with the adaptation of high-level, task-specific semantics.</description>
      <pubDate>Sun, 01 Feb 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">http://hdl.handle.net/10203/338969</guid>
      <dc:date>2026-02-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>