DFA parallelization  #18

@cangozpi

Description

Hi, I was going through your implementation of Direct Feedback Alignment (DFA). As far as I understand, you use the register_backward_hook function to send the global error (e) to the individual layers, which then compute their local gradient approximations using the fixed backward weights (B). But does your implementation support parallelization? I couldn't find where this property of the DFA algorithm is supported. I believe propagating the error via backward hooks works, but it still happens sequentially, just as in backpropagation, so we cannot benefit from DFA's potential for parallel layer updates. Thank you for your time.
Regards
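
For context on the property being asked about: in DFA each layer's weight update depends only on the global error e, a fixed random feedback matrix B_i, and that layer's own activations, so the per-layer gradient approximations are mutually independent and could in principle be computed in parallel. A minimal numpy sketch of this (layer sizes, variable names, and the tanh nonlinearity are illustrative assumptions, not taken from the repository being discussed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-hidden-layer MLP; sizes and names are illustrative only.
sizes = [4, 8, 8, 3]
W = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]
# Fixed random feedback matrices B_i map the global error e directly to each
# hidden layer -- this is what makes DFA's layer updates independent.
B = [rng.standard_normal((m, sizes[-1])) * 0.1 for m in sizes[1:-1]]

def forward(x):
    acts = [x]
    h = x
    for Wi in W[:-1]:
        h = np.tanh(Wi @ h)
        acts.append(h)
    y = W[-1] @ h          # linear output layer
    return acts, y

x = rng.standard_normal(sizes[0])
target = rng.standard_normal(sizes[-1])
acts, y = forward(x)
e = y - target             # global error, broadcast to every layer

# DFA gradient approximations: layer i uses only (e, B[i], local activations).
# No chain through downstream layers is required, unlike backprop, so each
# iteration of this loop could run concurrently on separate workers.
grads = []
for i in range(len(W) - 1):
    delta = (B[i] @ e) * (1 - acts[i + 1] ** 2)   # tanh'(z) = 1 - tanh(z)^2
    grads.append(np.outer(delta, acts[i]))
grads.append(np.outer(e, acts[-1]))               # output layer uses the true error
```

The point of the sketch is that the loop body has no dependence on other iterations, whereas backprop's deltas must be computed layer by layer; whether a hook-based implementation actually exploits this depends on how the hooks are scheduled.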
