

Poster in Workshop: 2nd AI for Math Workshop @ ICML 2025

Learning Moderately Input-Sensitive Functions: A Case Study in QR Code Decoding

Kazuki Yoda · Kazuhiko Kawamoto · Hiroshi Kera


Abstract:

The hardness of learning a function for a target task relates to its input sensitivity. For example, image classification is input-insensitive, since minor corruptions should not change the classification result, whereas arithmetic and symbolic computation, which have recently attracted growing interest, are highly input-sensitive, since every input variable affects the computation result. This study presents the first learning-based Quick Response (QR) code decoder and investigates the learning of functions of moderate input sensitivity. Our experiments reveal that Transformers can successfully decode QR codes, even beyond the theoretical error-correction limit, by learning the underlying structure of the embedded texts. They generalize from English-rich training data to other languages and even to random strings. Moreover, we observe that the Transformer-based QR decoder attends to the data bits while ignoring the error-correction bits, suggesting a decoding mechanism distinct from that of standard QR code readers.
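The paper's exact data pipeline is not given here, but the task framing suggested by the abstract can be illustrated with a minimal Python sketch: the flattened QR module matrix serves as the input sequence for a sequence-to-sequence model, the embedded text's bytes serve as the target, and random module flips simulate damage beyond the Reed-Solomon error-correction limit. The tokenization choices and the flip_rate parameter below are assumptions for illustration, not the authors' setup; the qrcode library (pip install qrcode) is a standard QR generator.

# Minimal sketch, not the paper's pipeline: frame QR decoding as seq2seq.
import random
import qrcode

def qr_bit_sequence(text: str, ec=qrcode.constants.ERROR_CORRECT_L) -> list[int]:
    """Encode `text` as a QR code and flatten its modules into a 0/1 sequence."""
    qr = qrcode.QRCode(error_correction=ec, border=0)
    qr.add_data(text)
    qr.make(fit=True)  # pick the smallest QR version that fits the data
    return [int(bool(m)) for row in qr.get_matrix() for m in row]

def corrupt(bits: list[int], flip_rate: float, seed: int = 0) -> list[int]:
    """Flip each module with probability `flip_rate` (assumed noise model),
    possibly beyond what the Reed-Solomon error correction can repair."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < flip_rate) for b in bits]

if __name__ == "__main__":
    text = "Hello, ICML 2025!"
    src = qr_bit_sequence(text)
    noisy = corrupt(src, flip_rate=0.15)  # above the ~7% tolerance of level L
    target = list(text.encode("utf-8"))   # byte-level target tokens (assumed)
    print(len(src), "modules,", sum(a != b for a, b in zip(src, noisy)), "flipped")

Under this framing, (noisy, target) pairs would be fed to an encoder-decoder Transformer; a learned decoder can exploit structure in the embedded texts rather than relying solely on the error-correction bits, which is consistent with the attention behavior the abstract reports.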
