SegVol: A Universal and Interactive Volumetric Medical Image Segmentation Model

Anonymous user · July 31, 2024

Technical Information

Repository
https://modelscope.cn/models/yuxindu/SegVol
License
MIT

Details


Language: [ZH / EN]

SegVol is a universal and interactive model for volumetric medical image segmentation. It accepts point, box, and text prompts and outputs volumetric segmentation masks. By training on 90k unlabeled Computed Tomography (CT) volumes and 6k labeled CTs, this foundation model supports the segmentation of over 200 anatomical categories.

The Paper, Code, and Demo have been released.

Keywords: 3D medical SAM, volumetric image segmentation

Quickstart

Requirements

conda create -n segvol_transformers python=3.8
conda activate segvol_transformers

PyTorch v1.11.0 or a higher version is required. Please also install the following support packages:

pip install modelscope
pip install 'monai[all]==0.9.0'
pip install einops==0.6.1
pip install transformers==4.18.0
pip install matplotlib

Test script

from transformers import AutoModel, AutoTokenizer
import torch
from modelscope import snapshot_download
import os

model_dir = snapshot_download('yuxindu/SegVol')

# get device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# load model
clip_tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModel.from_pretrained(model_dir, trust_remote_code=True, test_mode=True)
model.model.text_encoder.tokenizer = clip_tokenizer
model.eval()
model.to(device)
print('model load done')

# set case path
ct_path = os.path.join(model_dir, 'Case_image_00001_0000.nii.gz')
gt_path = os.path.join(model_dir, 'Case_label_00001.nii.gz')

# set categories, corresponding to the unique values (1, 2, 3, 4, ...) in the ground truth mask
categories = ["liver", "kidney", "spleen", "pancreas"]

# generate npy data format
ct_npy, gt_npy = model.processor.preprocess_ct_gt(ct_path, gt_path, category=categories)
# IF you have downloaded our 25 processed datasets, you can skip to here with the processed ct_npy, gt_npy files

# go through zoom_transform to generate zoom-out & zoom-in views
data_item = model.processor.zoom_transform(ct_npy, gt_npy)

# add batch dim manually
data_item['image'], data_item['label'], data_item['zoom_out_image'], data_item['zoom_out_label'] = \
data_item['image'].unsqueeze(0).to(device), data_item['label'].unsqueeze(0).to(device), data_item['zoom_out_image'].unsqueeze(0).to(device), data_item['zoom_out_label'].unsqueeze(0).to(device)

# take liver as the example
cls_idx = 0

# text prompt
text_prompt = [categories[cls_idx]]

# point prompt
point_prompt, point_prompt_map = model.processor.point_prompt_b(data_item['zoom_out_label'][0][cls_idx], device=device)   # inputs w/o batch dim, outputs w/ batch dim

# bbox prompt
bbox_prompt, bbox_prompt_map = model.processor.bbox_prompt_b(data_item['zoom_out_label'][0][cls_idx], device=device)   # inputs w/o batch dim, outputs w/ batch dim

print('prompt done')

# segvol test forward
# use_zoom: use zoom-out-zoom-in
# point_prompt_group: use point prompt
# bbox_prompt_group: use bbox prompt
# text_prompt: use text prompt
logits_mask = model.forward_test(image=data_item['image'],
      zoomed_image=data_item['zoom_out_image'],
      # point_prompt_group=[point_prompt, point_prompt_map],
      bbox_prompt_group=[bbox_prompt, bbox_prompt_map],
      text_prompt=text_prompt,
      use_zoom=True
      )

# calculate dice score
dice = model.processor.dice_score(logits_mask[0][0], data_item['label'][0][cls_idx], device)
print(dice)

# save prediction as nii.gz file
save_path = './Case_preds_00001.nii.gz'
model.processor.save_preds(ct_path, save_path, logits_mask[0][0],
                           start_coord=data_item['foreground_start_coord'],
                           end_coord=data_item['foreground_end_coord'])
print('done')
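The dice_score call in the script is SegVol's own utility. As a rough sketch of what the metric computes, here is a plain-NumPy version; the model's actual implementation may differ in detail, e.g. by applying a sigmoid to the logits before thresholding:

```python
import numpy as np

def dice_score(pred, gt, threshold=0.5):
    """Dice = 2*|P∩G| / (|P| + |G|) on binarized volumes."""
    p = (np.asarray(pred) > threshold).astype(np.float32)
    g = (np.asarray(gt) > threshold).astype(np.float32)
    denom = p.sum() + g.sum()
    # convention: two empty masks count as a perfect match
    return 1.0 if denom == 0 else 2.0 * (p * g).sum() / denom
```

For example, a two-voxel prediction overlapping a one-voxel ground truth in one voxel scores 2·1/(2+1) ≈ 0.67.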

Training script

from transformers import AutoModel, AutoTokenizer
import torch
from modelscope import snapshot_download
import os

model_dir = snapshot_download('yuxindu/SegVol')

# get device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# load model
clip_tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModel.from_pretrained(model_dir, trust_remote_code=True, test_mode=False)
model.model.text_encoder.tokenizer = clip_tokenizer
model.train()
model.to(device)
print('model load done')

# set case path
ct_path = os.path.join(model_dir, 'Case_image_00001_0000.nii.gz')
gt_path = os.path.join(model_dir, 'Case_label_00001.nii.gz')

# set categories, corresponding to the unique values (1, 2, 3, 4, ...) in the ground truth mask
categories = ["liver", "kidney", "spleen", "pancreas"]

# generate npy data format
ct_npy, gt_npy = model.processor.preprocess_ct_gt(ct_path, gt_path, category=categories)
# IF you have downloaded our 25 processed datasets, you can skip to here with the processed ct_npy, gt_npy files

# go through train transform
data_item = model.processor.train_transform(ct_npy, gt_npy)

# training example
# add batch dim manually
image, gt3D = data_item["image"].unsqueeze(0).to(device), data_item["label"].unsqueeze(0).to(device) # add batch dim

loss_step_avg = 0
for cls_idx in range(len(categories)):
    # optimizer.zero_grad()
    organs_cls = categories[cls_idx]
    labels_cls = gt3D[:, cls_idx]
    loss = model.forward_train(image, train_organs=organs_cls, train_labels=labels_cls)
    loss_step_avg += loss.item()
    loss.backward()
    # optimizer.step()

loss_step_avg /= len(categories)
print(f'AVG loss {loss_step_avg}')

# save ckpt
model.save_pretrained('./ckpt')
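The loop above leaves optimizer.zero_grad() and optimizer.step() commented out because no optimizer is constructed in the snippet. A minimal sketch of the full update cycle, using a tiny stand-in nn.Module in place of SegVol — AdamW and the learning rate here are assumptions for illustration, not the authors' published training settings:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# stand-in for SegVol: any module producing a scalar loss fits the same loop
model = nn.Linear(4, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

image = torch.randn(8, 4)    # stand-in batch
target = torch.randn(8, 1)

optimizer.zero_grad()                                # clear stale gradients
loss = nn.functional.mse_loss(model(image), target)
loss.backward()                                      # accumulate gradients
optimizer.step()                                     # apply the parameter update
print(f'loss {loss.item():.4f}')
```

In the SegVol loop, zero_grad() would go before iterating the categories (or per category, as the comments suggest) and step() after each backward pass.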

Start with M3D-Seg dataset

We have released the 25 open-source datasets (M3D-Seg) used for training SegVol, and the preprocessed data have been uploaded to ModelScope and HuggingFace. You can use the following script to easily load cases and insert them into the Test script and Training script.

git clone https://www.modelscope.cn/datasets/GoodBaiBai88/M3D-Seg.git

import json, os
M3D_Seg_path = 'path/to/M3D-Seg'

# select a dataset
dataset_code = '0000'

# load json dict
json_path = os.path.join(M3D_Seg_path, dataset_code, dataset_code + '.json')
with open(json_path, 'r') as f:
    dataset_dict = json.load(f)

# get a case
ct_path = os.path.join(M3D_Seg_path, dataset_dict['train'][0]['image'])
gt_path = os.path.join(M3D_Seg_path, dataset_dict['train'][0]['label'])

# get categories
categories_dict = dataset_dict['labels']
categories = [x for _, x in categories_dict.items() if x != "background"]

# load npy data format
ct_npy, gt_npy = model.processor.load_uniseg_case(ct_path, gt_path)
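The keys read by the script above imply a per-dataset JSON index with a 'labels' mapping and a 'train' list of image/label path pairs. The following self-contained sketch mirrors that assumed layout; the field values are placeholders for illustration, not real dataset entries:

```python
import json

# assumed minimal shape of an M3D-Seg index file such as 0000.json
index_text = '''
{
  "labels": {"0": "background", "1": "liver", "2": "kidney"},
  "train": [
    {"image": "0000/imagesTr/case_000.npy", "label": "0000/labelsTr/case_000.npy"}
  ]
}
'''
dataset_dict = json.loads(index_text)

# same extraction logic as the loading script: drop "background", keep organ names
categories = [x for _, x in dataset_dict['labels'].items() if x != 'background']
first_image = dataset_dict['train'][0]['image']
print(categories, first_image)
```

Iterating over dataset_dict['train'] in the same way yields every case of the selected dataset for the Test and Training scripts.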

