Many researchers claim that the self-attention mechanism improves performance regardless of circumstances. Questioning this claim, we examine different models with scaled dot-product self-attention mechanisms under various experimental settings to develop a comprehensive understanding of the technique. We verify that the benefit of the attention mechanism grows as the dataset increases in size. We also observe that the attention layer can degrade a model's performance on small datasets. Moreover, we demonstrate that the effect of the attention mechanism is model-dependent: opposite effects may be obtained on the same dataset with different models.
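For reference, the mechanism examined here is the standard single-head scaled dot-product self-attention, in which queries, keys, and values are linear projections of the same input sequence and the attention weights are a softmax over scaled query-key similarities. The sketch below is a minimal NumPy illustration under that standard formulation; the matrix names and dimensions are illustrative and are not taken from the paper's models.

```python
import numpy as np

def scaled_dot_product_self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence.

    X           : (seq_len, d_model) input embeddings
    Wq, Wk, Wv  : (d_model, d_k) learned projection matrices (illustrative)
    Returns     : (seq_len, d_k) attended representations
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Scaled query-key similarities, one row of scores per query position.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax (shifted by the row max for numerical stability).
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of the value vectors.
    return weights @ V
```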