How Robots Interpret the Facetune Algorithm

Updated: Oct 22

Facetune was designed to be intuitive for people—to smooth skin, brighten eyes, and subtly “perfect” portraits. But what happens when a machine takes control of that same system?

This question sits at the heart of Gretchen Andrew’s Facetune Portraits. By exploring how robots interpret Facetune, the work exposes how deeply cultural assumptions are baked into algorithmic processes. The robot doesn’t simply “see” beauty; it sees patterns, hierarchies, and a coded set of values.

The Facetune Portraits ask a radical question: what happens when robots interpret the Facetune algorithm?

The experiment reveals more than technical outcomes; it shines a light on the semiotics of digital culture. When the machine sharpens certain features or erases others, it isn't acting independently. It's reproducing the cultural ideals we've embedded in its code.


This is why the Facetune Portraits feel both uncanny and urgent. They remind us that technology isn’t neutral. It doesn’t just follow rules; it amplifies and enforces cultural scripts about perfection, identity, and worth.


By asking how a robot interprets Facetune, Andrew challenges us to ask bigger questions: How much of what we consider “natural” beauty is actually manufactured? And how much of that manufacturing is happening invisibly through code?

