Cross-Attention in Transformer Architecture
[https://vaclavkosar.com/ml/cross-attention-in-transformer-architecture] - - public:isaac
Merges two embedding sequences regardless of modality, e.g., image features with text embeddings in the Stable Diffusion U-Net; this is the encoder-decoder attention of the original Transformer (a minimal sketch follows below).
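As an illustration of the idea in the note above, here is a minimal single-head cross-attention sketch in PyTorch. The class name, projection layout, and dimensions are assumptions for illustration only (the 77-token, 768-dim context roughly mirrors CLIP-conditioned Stable Diffusion), not code from the linked article.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Single-head cross-attention: queries come from one sequence,
    keys/values from another, so the two sequences can differ in
    length, dimension, and modality."""
    def __init__(self, query_dim: int, context_dim: int, inner_dim: int):
        super().__init__()
        self.scale = inner_dim ** -0.5
        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_out = nn.Linear(inner_dim, query_dim)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # x:       (batch, n_image_tokens, query_dim), e.g. a flattened U-Net feature map
        # context: (batch, n_text_tokens, context_dim), e.g. text-encoder embeddings
        q = self.to_q(x)
        k = self.to_k(context)
        v = self.to_v(context)
        # Scaled dot-product attention of image queries over text keys
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # Output keeps the query sequence's length, with content mixed in from context
        return self.to_out(attn @ v)

# Illustrative shapes: 64x64 image tokens attending to a 77-token text prompt
layer = CrossAttention(query_dim=320, context_dim=768, inner_dim=320)
out = layer(torch.randn(1, 4096, 320), torch.randn(1, 77, 768))
print(out.shape)  # torch.Size([1, 4096, 320])
```

Note the asymmetry: only the queries are taken from the image stream, so the output has the image sequence's length while its content is conditioned on the text; with `x == context` this reduces to ordinary self-attention.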
Attending to Attention and Intention
'Inattention blindness' due to brain load
[http://www.ucl.ac.uk/news/news-articles/1207/17072012-Inattention-blindness-due-to-brain-load-Lavie] - - public:time