Figure 2. The CEPROQHA team capturing various cultural assets at the Museum of Islamic Art (Doha, Dec. 2018)

The project team designed and implemented several innovative AI-based techniques aimed at promoting cultural assets and increasing their attractiveness. These techniques leverage recent advances in deep learning, computer vision, natural language processing, and optimization.

Multitask Art and Culture Classification

Our first approach enriches cultural assets through metadata completion, in contrast to traditional captioning approaches. It uses multitask deep convolutional networks with hard parameter sharing, leveraging the correlations that exist between the output labels. The result is an efficient model that outperforms traditional single-task models.
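The sketch below illustrates the general idea of hard parameter sharing as described above: a single shared visual backbone feeds several task-specific heads, and the per-task losses are summed so that correlated labels regularise the shared weights. The metadata fields, class counts, and choice of ResNet-18 backbone are illustrative assumptions, not details taken from the project.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultitaskArtClassifier(nn.Module):
    """Hard parameter sharing: one shared backbone, one head per metadata field."""

    def __init__(self, num_classes_per_task):
        super().__init__()
        # Shared convolutional backbone (parameters common to all tasks).
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # keep only the feature extractor
        self.backbone = backbone
        # One lightweight classification head per metadata field.
        self.heads = nn.ModuleDict({
            task: nn.Linear(feat_dim, n)
            for task, n in num_classes_per_task.items()
        })

    def forward(self, images):
        shared = self.backbone(images)       # shared representation
        return {task: head(shared) for task, head in self.heads.items()}

# Hypothetical metadata fields and class counts, for illustration only.
tasks = {"period": 12, "material": 8, "origin": 20}
model = MultitaskArtClassifier(tasks)
outputs = model(torch.randn(4, 3, 224, 224))

# Joint loss: sum the per-task cross-entropies so that correlated labels
# jointly shape the shared backbone.
labels = {t: torch.randint(0, n, (4,)) for t, n in tasks.items()}
loss = sum(nn.functional.cross_entropy(outputs[t], labels[t]) for t in tasks)
loss.backward()
```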

Figure 3. Multitask Classification model architecture

Multimodal Art and Culture Classification

Our second approach to enriching annotated cultural assets uses multimodal machine learning. A multimodal deep neural network takes multiple types of input data (validated on visual and textual features) and predicts the missing metadata. The network uses hard parameter sharing to encode a more accurate representation of the asset in the shared feature space. Experimental results show that this approach outperforms single-input models, consistent with the observation that deep learning techniques benefit from more data and richer representations.
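As a minimal sketch of the multimodal idea described above: modality-specific encoders project the visual and textual features into a common space, a shared fusion trunk combines them, and a classifier predicts the missing metadata field. The feature dimensions and class count are placeholders, since the article does not specify them.

```python
import torch
import torch.nn as nn

class MultimodalAssetNet(nn.Module):
    """Fuses visual and textual feature vectors to predict a missing metadata field."""

    def __init__(self, visual_dim, text_dim, joint_dim, num_classes):
        super().__init__()
        # Modality-specific encoders project each input into a common space.
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, joint_dim), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, joint_dim), nn.ReLU())
        # Shared fusion trunk: both modalities update these weights
        # (hard parameter sharing over the joint representation).
        self.fusion = nn.Sequential(
            nn.Linear(2 * joint_dim, joint_dim),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(joint_dim, num_classes)

    def forward(self, visual_feat, text_feat):
        fused = torch.cat(
            [self.visual_enc(visual_feat), self.text_enc(text_feat)], dim=-1
        )
        return self.classifier(self.fusion(fused))

# Placeholder dimensions: e.g. 512-d CNN features and 300-d word embeddings.
model = MultimodalAssetNet(visual_dim=512, text_dim=300, joint_dim=256, num_classes=15)
logits = model(torch.randn(4, 512), torch.randn(4, 300))   # batch of 4 assets
```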