Images of landscapes that together appear to form one continuous scene are projected onto screens on three walls of the exhibition space. The work is realized by analyzing photographs found in Google Street View, selecting sets of three pictures that look similar enough to connect, and joining them together horizontally. Projecting one photograph onto each of the three screens creates the impression that these places, in reality far apart from each other, belong to the same landscape. The results can be considered imaginary landscapes created by a machine learning algorithm.
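The selection step described above can be sketched in code. The following is a minimal, hypothetical illustration, not the artist's actual pipeline: it assumes each Street View photograph has already been reduced to an L2-normalized feature vector (in practice such embeddings might come from a pretrained vision network), and simply picks the triple of images whose embeddings are most mutually similar as a stand-in for "similar and connectable".

```python
import numpy as np

def find_connectable_triple(embeddings: np.ndarray) -> tuple:
    """Return indices of the three images whose feature embeddings
    are most mutually similar.

    `embeddings` is an (n, d) array of L2-normalized image features;
    here plain vectors stand in for learned CNN features."""
    n = embeddings.shape[0]
    sims = embeddings @ embeddings.T  # pairwise cosine similarity
    best, best_score = None, -np.inf
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                score = sims[i, j] + sims[j, k] + sims[i, k]
                if score > best_score:
                    best, best_score = (i, j, k), score
    return best

# Toy demo: images 0-2 share a common "look", 3-4 are unrelated.
rng = np.random.default_rng(0)
base = rng.normal(size=8)
vecs = np.stack([base + 0.05 * rng.normal(size=8) for _ in range(3)]
                + [rng.normal(size=8) for _ in range(2)])
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
print(find_connectable_triple(vecs))
```

The brute-force triple search is cubic in the number of candidates; a real system scanning many Street View photographs would need approximate nearest-neighbor search instead.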
At the same time, fictional soundscapes that plausibly correspond to these composite landscapes are generated using artificial neural networks that have learned the relationship between sound and vision from large amounts of video data. By superimposing these imaginary soundscapes onto the imaginary landscapes, the artist creates multilayered environments that exist nowhere, and are therefore entirely virtual both visually and acoustically.
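One common way such sound–vision matching works, sketched here as an assumption rather than a description of this specific work, is cross-modal retrieval: a network trained on video (where image frames and audio co-occur) maps both modalities into a shared embedding space, and for a given landscape image the closest sound embedding is retrieved. The vectors below are placeholders for those learned embeddings.

```python
import numpy as np

def nearest_soundscape(image_vec: np.ndarray, sound_vecs: np.ndarray) -> int:
    """Return the index of the sound whose embedding lies closest to
    the image embedding in a shared audio-visual feature space.

    Assumes both modalities were already projected into the same space
    by a network trained on co-occurring video frames and audio."""
    image_vec = image_vec / np.linalg.norm(image_vec)
    sound_vecs = sound_vecs / np.linalg.norm(sound_vecs, axis=1, keepdims=True)
    return int(np.argmax(sound_vecs @ image_vec))

# Toy demo: sound 1 points roughly the same way as the image embedding.
image = np.array([1.0, 0.0, 0.0])
sounds = np.array([[0.0, 1.0, 0.0],
                   [0.9, 0.1, 0.0],
                   [0.0, 0.0, 1.0]])
print(nearest_soundscape(image, sounds))  # prints 1
```

Retrieval of existing recordings (rather than synthesizing audio from scratch) is one plausible design for producing soundscapes that feel native to a scene they were never recorded in.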
For example, an imaginary landscape may combine contrasting photographs of such places as a busy downtown street in a big city, run-down buildings in a slum, and trees in a natural forest or a man-made park. These differences and contrasts expose polarization and other social problems the world is facing today, while on the other hand, an unexpected uniformity may also be detected across different landscapes. This work represents the artist's attempt to paint landscape pictures that project such circumstances of the world.
Concept / Machine Learning: TOKUI Nao
Lead Programmer: Robin JUNGERS
Assistant Programmer: FUJINAMI Hidemaro
Machine Learning: KAJIHARA Yuma