AttributeError: 'Block' object has no attribute 'drop_path' #54

Open
zethriller opened this issue Oct 14, 2023 · 3 comments

@zethriller

Note: this may just be me not knowing how to use the extension, so please explain if needed. This is a very basic test; I also haven't found how to position the foreground items.

Testing extension with background + 2 foreground characters
Model: dynavisionXL, image size 832x1216
Settings:
[screenshot of the extension settings]

After correctly generating the background and the two foreground images, the preview disappears and an error shows up instead:
"AttributeError: 'Block' object has no attribute 'drop_path' "

Traceback:

Traceback (most recent call last):
      File "F:\automatic1111\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "F:\automatic1111\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
        processed = modules.scripts.scripts_txt2img.run(p, *args)
      File "F:\automatic1111\stable-diffusion-webui\modules\scripts.py", line 601, in run
        processed = script.run(p, *script_args)
      File "F:\automatic1111\stable-diffusion-webui\extensions\multi-subject-render\scripts\multirender.py", line 267, in run
        foreground_image_mask = sdmg.calculate_depth_map_for_waifus(foreground_image)
      File "F:\automatic1111\stable-diffusion-webui\extensions/multi-subject-render/scripts/simple_depthmap.py", line 149, in calculate_depth_map_for_waifus
        prediction = model.forward(sample)
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\dpt_depth.py", line 166, in forward
        return super().forward(x).squeeze(dim=1)
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\dpt_depth.py", line 114, in forward
        layers = self.forward_transformer(self.pretrained, x)
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\backbones\beit.py", line 15, in forward_beit
        return forward_adapted_unflatten(pretrained, x, "forward_features")
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\backbones\utils.py", line 86, in forward_adapted_unflatten
        exec(f"glob = pretrained.model.{function_name}(x)")
      File "<string>", line 1, in <module>
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\backbones\beit.py", line 125, in beit_forward_features
        x = blk(x, resolution, shared_rel_pos_bias=rel_pos_bias)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\backbones\beit.py", line 102, in block_forward
        x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), resolution,
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'Block' object has no attribute 'drop_path'
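
For what it's worth, the failing call is MiDaS's patched block_forward in repositories\midas\midas\backbones\beit.py, which still references self.drop_path on timm's BEiT Block; newer timm releases split that attribute into drop_path1 and drop_path2, which would explain the AttributeError. A quick diagnostic sketch to check which names the installed timm actually exposes (run inside the webui venv; the import path and constructor arguments follow timm 0.6.x and are assumptions for other releases):

    import timm
    from timm.models.beit import Block  # the block class that MiDaS monkey-patches

    # Build a bare BEiT block and check which drop-path attribute it carries.
    blk = Block(dim=768, num_heads=12)
    print("timm version:     ", timm.__version__)
    print("has drop_path:    ", hasattr(blk, "drop_path"))
    print("has drop_path1/2: ", hasattr(blk, "drop_path1"), hasattr(blk, "drop_path2"))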
@Ereshkigal0

Having the exact same issue and haven't been able to solve it yet. It would be nice if someone could chime in and tell us noobs what we're doing wrong, or whether it's just broken, lol.

@Extraltodeus
Owner

Ouch. I'm not using A1111 anymore, and right now I'm not sure what went wrong. I'm sorry that you guys can't use it. I will take a look at it during the upcoming month if possible!

@getsmartt

It does function with the MiDaS model, although I cannot get a suitable image out of it; the script appears to be broken with the other models.

@leomaxwell973

Changing the lines of code in block_forward, around line 102 of automatic1111\repositories\midas\midas\backbones\beit.py:

FROM

        x = x + self.drop_path(self.attn(self.norm1(x), resolution, shared_rel_pos_bias=shared_rel_pos_bias))
        x = x + self.drop_path(self.mlp(self.norm2(x)))
    else:
        x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), resolution,
                                                        shared_rel_pos_bias=shared_rel_pos_bias))
        x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x)))

TO:

        x = x + self.drop_path1(self.attn(self.norm1(x), resolution, shared_rel_pos_bias=shared_rel_pos_bias))
        x = x + self.drop_path2(self.mlp(self.norm2(x)))
    else:
        x = x + self.drop_path1(self.gamma_1 * self.attn(self.norm1(x), resolution,
                                                        shared_rel_pos_bias=shared_rel_pos_bias))
        x = x + self.drop_path2(self.gamma_2 * self.mlp(self.norm2(x)))

(Manually adding the path numbers is really all it takes.)

This seems to brute-force a fix; however, brute-forcing it this way seems to cause a memory leak, as it goes from successful runs to OOM exceptions immediately, before it even renders a single pre-image.

I honestly don't know a lot of Python; I just know how to follow traces and stacks while guessing syntax along the way, so I have no idea whether these changes are just bad or whether this is close to a solution.
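
A slightly more defensive variant of the same edit (a sketch, not verified against the extension, and assuming the surrounding lines of block_forward in the file stay as they are) would fall back to whichever attribute the installed timm actually provides, so the function keeps working with both old and new timm releases:

    # Sketch of a version-tolerant block_forward for midas\backbones\beit.py.
    # Older timm exposes self.drop_path; newer timm splits it into
    # self.drop_path1 / self.drop_path2, so pick whichever exists on this block.
    def block_forward(self, x, resolution, shared_rel_pos_bias=None):
        drop_path1 = self.drop_path1 if hasattr(self, "drop_path1") else self.drop_path
        drop_path2 = self.drop_path2 if hasattr(self, "drop_path2") else self.drop_path
        if self.gamma_1 is None:
            x = x + drop_path1(self.attn(self.norm1(x), resolution,
                                         shared_rel_pos_bias=shared_rel_pos_bias))
            x = x + drop_path2(self.mlp(self.norm2(x)))
        else:
            x = x + drop_path1(self.gamma_1 * self.attn(self.norm1(x), resolution,
                                                        shared_rel_pos_bias=shared_rel_pos_bias))
            x = x + drop_path2(self.gamma_2 * self.mlp(self.norm2(x)))
        return x

Either way, the rename only changes which existing submodule gets called, so the OOM mentioned above is more likely a separate problem (the BEiT-512 backbone is simply heavy) than a side effect of this edit.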

Another thought: this is in the beit.py file, i.e. the MiDaS BEiT-512 model (the big one). Perhaps changing to a different model (swin2) will resolve it? That's on my to-do list, at least, if I cannot resolve BEiT.
