Patternshop: Editing Point Patterns by Image Manipulation - Supplemental Results

ACM Transactions on Graphics (Proceedings of SIGGRAPH), 2023
1Max-Planck-Institut für Informatik, 2University College London, 3CNRS, LIX, Ecole Polytechnique, INRIA

Ab-initio point pattern design

Our method designs point patterns from scratch by editing the L channel (density map) and the AB channels (correlation map) of a raster LAB image. The editing and synthesis framework is not limited to any particular category of images. Please click on the point pattern (right-most) to see the vector-graphics version.
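As a rough illustration of the density half of this encoding (not the paper's method, and ignoring the correlation channels entirely), the L channel can be treated as a probability map and sampled directly. The function name `sample_points` and the inverse-CDF sampling over pixels are assumptions for this sketch:

```python
# Hedged sketch: draw points whose local density follows a grayscale
# "L-channel" map, by sampling pixels proportionally to their values
# and jittering uniformly inside each chosen pixel.
import numpy as np

def sample_points(density, n, seed=0):
    """density: 2-D array of non-negative weights; returns n points in [0, 1]^2."""
    rng = np.random.default_rng(seed)
    h, w = density.shape
    p = density.ravel().astype(float)
    p /= p.sum()                          # normalize to a pmf over pixels
    idx = rng.choice(h * w, size=n, p=p)  # pick pixels by density
    ys, xs = np.divmod(idx, w)
    u = rng.random((n, 2))                # uniform jitter inside each pixel
    return np.stack([(xs + u[:, 0]) / w, (ys + u[:, 1]) / h], axis=1)

# A left-to-right density ramp should place more points on the right.
ramp = np.tile(np.linspace(0.05, 1.0, 64), (64, 1))
pts = sample_points(ramp, 2000)
```

This reproduces only first-order density; controlling second-order statistics (the AB correlation channels) is what the paper's synthesis adds on top.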


[Eight examples; each row shows: edited input density and correlation (LAB), edited input density (L), edited input correlation (AB), edited output point pattern]




Neural network aided point pattern design

We use networks trained on different categories of images (human faces, animal faces, and LSUN churches) to reconstruct density and correlation maps, which are then edited and used to synthesize new point patterns. Please click on the point patterns to see the vector-graphics versions.
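The reconstruction direction (points in, density map out) can be approximated crudely without any network: bin the points into a histogram and smooth it. This is only a stand-in for the paper's learned reconstruction; the function name `density_from_points` and the histogram-plus-Gaussian-blur estimator are assumptions of this sketch, and it recovers density only, not correlation:

```python
# Hedged sketch: estimate a density map from a point set by histogram
# binning followed by a separable Gaussian blur (pure NumPy).
import numpy as np

def density_from_points(points, res=64, sigma=1.5):
    """points: (n, 2) array in [0, 1]^2 -> res x res density map, max-normalized."""
    # First histogram axis = y so the map is indexed [row, col].
    hist, _, _ = np.histogram2d(points[:, 1], points[:, 0],
                                bins=res, range=[[0, 1], [0, 1]])
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # Separable blur: convolve each column, then each row.
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 0, hist)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 1, blurred)
    return blurred / blurred.max()

# A tight cluster of points should produce a peak at the cluster center.
rng = np.random.default_rng(1)
cluster = np.clip(rng.normal([0.75, 0.25], 0.03, size=(500, 2)), 0, 1)
dmap = density_from_points(cluster)
```

Once a density (and, in the paper, correlation) map exists as an image, it can be edited with ordinary image tools before re-synthesis.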


[Eight examples; each row shows: input points, network output, editing, our synthesis, network output (L), network output (AB), editing (L), editing (AB)]




Point pattern expansion

We use the network trained on a tree-cover density dataset to reconstruct density and correlation maps from the following point distributions, which were generated by our synthesis method. We show that we can reconstruct spatially varying correlation and achieve automatic non-stationary point-pattern expansion while preserving that correlation. Please click on the point patterns to see the vector-graphics versions.
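Expansion here operates on the reconstructed maps as images. As a much cruder stand-in for content-aware fill (which synthesizes plausible new content), a map can simply be enlarged by mirroring its border content outward; the function name `expand_map` is an assumption of this sketch, and mirroring obviously cannot invent the new structure that content-aware fill does:

```python
# Hedged sketch: grow a density/correlation map outward by mirror
# padding, as a trivial substitute for content-aware fill.
import numpy as np

def expand_map(m, pad):
    """Pad a 2-D map on all four sides by reflecting its border content."""
    return np.pad(m, pad, mode='symmetric')

small = np.arange(16.0).reshape(4, 4)
big = expand_map(small, 2)   # 4x4 -> 8x8; interior is the original map
```

After expansion, points are re-synthesized over the enlarged maps, so the new regions inherit density and correlation from the filled-in image content.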


[Five examples; each row shows: input points, network output, editing (content-aware fill in Adobe Photoshop 2022), our synthesis]




Image stippling reconstruction and editing (from existing methods)

We use the network trained on faces to reconstruct density and correlation maps. The test inputs come from existing stippling methods, including Zhou et al. [2012] and Salaun et al. [2022]. This shows that, compared with the original input stipplings, we can edit legacy point patterns to have spatially varying correlations. Please click on the point patterns to see the vector-graphics versions.


[Two examples, with inputs from Zhou et al. and Salaun et al.; each row shows: input, network output, editing, our synthesis, network output (L), network output (AB), editing (L), editing (AB)]




Image stippling reconstruction (identity edit)

We use the network trained on faces to reconstruct density and correlation maps, which are then used for re-synthesis without editing. Please click on the point patterns to see the vector-graphics versions.


[Two examples; each row shows: input points, network output, editing, our synthesis, network output (L), network output (AB), editing (L), editing (AB)]