
Chinese Journal of Chemical Engineering ›› 2025, Vol. 87 ›› Issue (11): 103-114. DOI: 10.1016/j.cjche.2025.05.023

MicroFlowSAM: A motion-prompted instance segmentation approach in microfluidics with zero annotation and training

Wenle Xu1,2, Lin Sheng1,2, Tong Qiu1,2, Kai Wang1,2, Guangsheng Luo1,2   

  1. Department of Chemical Engineering, Tsinghua University, Beijing 100084, China;
  2. State Key Laboratory of Chemical Engineering and Low-Carbon Technology, Tsinghua University, Beijing 100084, China
  • Received: 2025-02-13  Revised: 2025-05-18  Accepted: 2025-05-20  Online: 2025-11-28  Published: 2025-06-30
  • Contact: Tong Qiu, E-mail: qiutong@tsinghua.edu.cn
  • Supported by:
    The authors gratefully acknowledge the financial support from the National Natural Science Foundation of China (21991104).

Abstract: Microdispersion technology is crucial for a variety of applications in both the chemical and biomedical fields. The precise and rapid characterization of microdroplets and microbubbles is essential for research as well as for optimizing and controlling industrial processes. Traditional methods often rely on time-consuming manual analysis. Although some deep learning-based computer vision methods have been proposed for automated identification and characterization, these approaches often rely on supervised learning, which requires labeled data for model training. Preparing such labeled data can be time-consuming and expensive, especially when working with large and complex datasets. To address these challenges, we propose MicroFlowSAM, an innovative, motion-prompted, annotation-free, and training-free instance segmentation approach. By utilizing the motion of microdroplets and microbubbles as prompts, our method directs large-scale vision models to perform accurate instance segmentation without the need for annotated data or model training. This approach eliminates the need for human intervention in data labeling and reduces computational costs, significantly streamlining the data analysis process. We demonstrate the effectiveness of MicroFlowSAM across 12 diverse datasets, achieving segmentation results competitive with traditional methods. This novel approach not only accelerates the analysis process but also establishes a foundation for efficient process control and optimization in microfluidic applications. MicroFlowSAM substantially reduces the complexity and resource demands of instance segmentation, enabling faster insights and advancements in the microdispersion field.
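
The motion-prompt idea described in the abstract can be pictured with a short code sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes simple frame differencing between two consecutive video frames to locate moving droplets or bubbles, and uses the publicly released segment-anything package (SamPredictor with a ViT-H checkpoint) as the large vision model. The frame paths, binarization threshold, and minimum blob area are hypothetical placeholders.

# Minimal sketch (assumptions noted above): motion-derived point prompts for SAM.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# 1) Motion cue: difference two consecutive frames of the microchannel video.
frame_a = cv2.imread("frame_000.png")                    # hypothetical frame paths
frame_b = cv2.imread("frame_001.png")
diff = cv2.absdiff(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY),
                   cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY))
_, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)   # assumed threshold
motion = cv2.morphologyEx(motion, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

# 2) Each moving blob (candidate droplet/bubble) contributes one point prompt.
n_blobs, _, stats, centroids = cv2.connectedComponentsWithStats(motion)

# 3) Prompt a frozen SAM model; no annotation and no training are involved.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(cv2.cvtColor(frame_b, cv2.COLOR_BGR2RGB))  # SAM expects RGB

instance_masks = []
for i in range(1, n_blobs):                              # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] < 20:                  # drop tiny motion noise (assumed)
        continue
    point = np.array([centroids[i]], dtype=np.float32)   # (x, y) centre of the blob
    masks, scores, _ = predictor.predict(
        point_coords=point,
        point_labels=np.array([1]),                      # 1 = foreground point
        multimask_output=True,
    )
    instance_masks.append(masks[int(np.argmax(scores))]) # keep the highest-scoring mask

From the resulting binary masks, per-droplet size and shape statistics can then be extracted with standard image-analysis tools.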

Key words: Microfluidics, Microdispersion, Instance segmentation, Large vision model, Prompt engineering