
Regarding patching at LEVEL=1 #8

Open
SuperCewang opened this issue Apr 10, 2024 · 10 comments

Comments

@SuperCewang

If I skip tools/big_to_small_patching.py and create the LEVEL=1 patching directly with CLAM, should SIZE be set to 1024? Also, in the README commands, the --color_norm parameter in the feature-extraction step does not seem to be defined.

@liupei101
Owner

> If I create the LEVEL=1 patching directly with CLAM, should SIZE be set to 1024?

SIZE means the patch size at the level given by LEVEL, so setting it to 256 is correct.
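A small worked example may make the size semantics clearer. The numbers below are hypothetical (a slide whose level-1 downsample factor is 4, as is common for TCGA slides); the point is that SIZE is measured at LEVEL, not at level 0:

```python
# Hypothetical slide: level-1 downsample factor of 4
# (in openslide this would be wsi.level_downsamples[1]).
level = 1
size = 256        # SIZE: patch size *at* the level given by LEVEL
downsample = 4
footprint_level0 = size * downsample
# A 256 px patch read at level 1 already covers a 1024 px region
# of level 0, so SIZE stays 256; setting it to 1024 would quadruple
# the physical footprint of each patch.
```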


> In the README commands, the --color_norm parameter in the feature-extraction step does not seem to be defined.

The official CLAM repository does not define a color_norm parameter. Our modified CLAM provides it; see the code for details.

@SuperCewang
Author

If I process my own data with the official CLAM tools and do not use the color_norm parameter, will that cause a dimension mismatch?

@liupei101
Owner

The color_norm parameter does not affect dimension matching.

@SuperCewang
Author

Hello, while running main.py, a bug appears in WSIpatchdata.py: feats_x20.shape[0] and 16*feats_x5.shape[0] are not equal. How should I handle this? Thanks for your guidance.

@liupei101
Owner

There are two possible causes:

  • The data was not preprocessed following the big-to-small patching workflow. The standard workflow first extracts patches at the low resolution (l=2) and then at the high resolution (l=1); see this repo's README for details.
  • If you did follow that workflow, the problem may be that in your data a low-resolution patch does not correspond to exactly 16 high-resolution patches. For TCGA and CAMELYON16, each low-resolution (l=2) patch corresponds to 16 high-resolution (l=1) patches. In that case, check the resolution correspondence in your own data.
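The 1:16 correspondence described above can be checked with a quick sanity assertion before training. The array shapes below are hypothetical; substitute the feature files produced by your own preprocessing:

```python
import numpy as np

# Hypothetical feature arrays with the expected correspondence:
# each low-resolution (l=2) patch maps to 16 high-resolution (l=1) patches.
feats_x5 = np.zeros((40, 1024))    # 40 low-resolution patch features
feats_x20 = np.zeros((640, 1024))  # 40 * 16 high-resolution patch features

assert feats_x20.shape[0] == 16 * feats_x5.shape[0], (
    "patch counts break the 1:16 correspondence; "
    "re-run big-to-small patching as described in the README"
)
```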

@SuperCewang
Author

Hello! When extracting features with your modified CLAM I get AttributeError: module 'torchstain' has no attribute 'MacenkoNormalizer'. My torchstain version is 1.3.0. How should I resolve this?

@SuperCewang
Author

Hello! Feature extraction at level=2 went smoothly, but after running big-to-small patching, feature extraction at LEVEL=1 fails with:

Traceback (most recent call last):
  File "extract_features_fp.py", line 197, in <module>
    output_file_path = compute_w_loader(h5_file_path, output_pt_path, wsi,
  File "extract_features_fp.py", line 55, in compute_w_loader
    for count, (batch, coords) in enumerate(loader):
  File "/public/home/wangsc/anaconda3/envs/DSCA/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
    data = self._next_data()
  File "/public/home/wangsc/anaconda3/envs/DSCA/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 675, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/public/home/wangsc/anaconda3/envs/DSCA/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/public/home/wangsc/anaconda3/envs/DSCA/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/public/home/wangsc/TCGA_RCC/Pipeline-Processing-TCGA-Slides-for-MIL-main/tools/CLAM/datasets/dataset_h5.py", line 199, in __getitem__
    img = self.wsi.read_region(coord, self.patch_level, (self.patch_size, self.patch_size)).convert('RGB')
  File "/public/home/wangsc/anaconda3/envs/DSCA/lib/python3.8/site-packages/openslide/__init__.py", line 251, in read_region
    region = lowlevel.read_region(
  File "/public/home/wangsc/anaconda3/envs/DSCA/lib/python3.8/site-packages/openslide/lowlevel.py", line 335, in read_region
    _read_region(slide, buf, x, y, level, w, h)
ctypes.ArgumentError: argument 3: <class 'TypeError'>: wrong type

I tried skipping big-to-small patching and this error does not seem to occur, but then the dimension mismatch shows up again later. Could you help me resolve this?

@liupei101
Owner

It is probably because the coordinates have the wrong data type; see this: #3
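To illustrate the data-type issue: coordinates loaded from an HDF5 file often come back as numpy (sometimes float) scalars, while openslide's ctypes wrapper expects plain Python ints in the (x, y) location tuple. A minimal sketch of a defensive cast before read_region (the coordinate values are hypothetical):

```python
import numpy as np

# Hypothetical coordinate pair as it might be read from an HDF5 file.
coord = np.array([1024.0, 2048.0])

# Cast every component to a plain Python int before passing to
# wsi.read_region(coord, patch_level, (patch_size, patch_size)),
# avoiding ctypes.ArgumentError on the location argument.
coord = tuple(int(c) for c in coord)
```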

liupei101 added a commit that referenced this issue Apr 24, 2024
@SuperCewang
Author

Hello! When running main.py I hit this problem:

Traceback (most recent call last):
  File "main.py", line 101, in <module>
    multi_run_main(config)
  File "main.py", line 34, in multi_run_main
    metrics = model.exec()
  File "/public/home/daijinpeng/wsc/DSCA-main/model/model_handler.py", line 127, in exec
    self._run_training(train_loader, val_loaders=val_loaders, val_name=val_name, measure=True, save=False)
  File "/public/home/daijinpeng/wsc/DSCA-main/model/model_handler.py", line 171, in _run_training
    train_cltor, batch_avg_loss = self._train_each_epoch(train_loader)
  File "/public/home/daijinpeng/wsc/DSCA-main/model/model_handler.py", line 227, in _train_each_epoch
    y_hat = self.model(fx, fx5, cx5)
  File "/public/home/daijinpeng/anaconda3/envs/yx/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/public/home/daijinpeng/wsc/DSCA-main/model/HierNet.py", line 247, in forward
    patchx20_emb, x20_x5_cross_attn, _ = self.patchx20_embedding_layer(x20, x5)  # [B, 16N, d] -> [B, N, d']
  File "/public/home/daijinpeng/anaconda3/envs/yx/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/public/home/daijinpeng/wsc/DSCA-main/model/model_utils.py", line 298, in forward
    x20, L = sequence2square(x20, self.scale)  # [BN/(s^2), C, s, s]
  File "/public/home/daijinpeng/wsc/DSCA-main/model/model_utils.py", line 88, in sequence2square
    assert size[1] % (s * s) == 0
AssertionError

Debugging shows that x has size [1, 636, 1024] while s=4, so 636 is not divisible by s*s=16. Where have I gone wrong?

@SuperCewang
Author

I tried padding the second dimension of x to 640, so that L = 640/16 = 40, but later x5.shape[1] = 159, which again does not match L. Is this because I did not use the use_padding parameter during preprocessing? But I see that parameter is enabled by default.
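The arithmetic in this report can be laid out explicitly. A minimal sketch using the numbers quoted above shows why padding only the x20 stream cannot restore consistency; the two streams must satisfy n_x20 == s*s * n_x5 from preprocessing onward:

```python
s = 4                    # sequence2square scale: each x5 patch spans s*s = 16 x20 patches
n_x20, n_x5 = 636, 159   # patch counts reported above

# Padding x20 alone from 636 to 640 gives 640 // 16 = 40 squares,
# while x5 still holds 159 patches, so the streams disagree.
padded_x20 = 640
squares = padded_x20 // (s * s)

# A consistent pairing requires n_x20 == s*s * n_x5,
# i.e. 159 x5 patches need 159 * 16 = 2544 x20 patches.
required_x20 = s * s * n_x5
```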
