Are machines still dreaming of electric sheep? Or have they moved on to replacing human workers?
On September 22, Reuters reported that:
« Russian firm Deepcake uses an artificial neural network to impose Bruce Willis’ image onto the face of a Russian actor of similar age and appearance. The digital replica received consent from the real Bruce Willis »
It has since been revealed that the actor has not, in fact, authorized this use of his likeness.
Either way, the technical tools enabling this kind of transformation are already being exploited. One can only wonder what military research is doing now…
Maria Chmir, CEO of Deepcake: «I suppose that in filmmaking in the future there will be a division – there will be a content created with the usage of digital people, digital replicas, i-replicas, they can be called differently and with alive, authentic, protein-based as we in neural network world say, characters. »
Do we, as protein-based characters of this world-Theater, own any right to our faces?
Can anyone use, reproduce, hide, expose, or transform my appearance without my consent and knowledge?
Who can validate, authenticate my face? My family and friends? My coworkers? The people who print passports? The owners of servers providing storage for surveillance cameras?
In days when the human body is a war zone, this is no small issue. The banalization of identity ownership and the commoditization of the image are showing their effects: a ground zero for express imagination, where scalable initiatives are bound to flourish.
On the screen we are all the same, after all. Since the early days of cinema, the possibility of representing impossible, fictional characters from the European repertoire has been grasped. Nowadays the highly refined technical means of production have been commoditized, and they are widely available on publicly accessible content delivery networks.
There is no real difference between a binary and a protein-based character, because they both coexist in the same representational model: an aspect of the protein-based character has been captured using optical and audio tools, translated into electrical current, and transported over network equipment. We have been getting used to being virtual since letters existed, or maybe since before that. The telephone already works on the assumption that a physical medium can transport aspects of a human being, and it has opened the possibility of distant interactions, of the fluidity of presences.
Who can guarantee that our real-world identities are not banalized at the same time as our virtual presences seem bound to circulate, thanks to the digital revolution of government and the unstoppable automation of processes across the entire catalog of services? Education, justice, health, social interaction, banking: they all ask us to comply by producing a verifiable version of ourselves, of our images. And increasingly, the control and administration, even the development, of the means of archiving, consulting, and authenticating identities is an obscure process for people in general.
The political implications and consequences of this new status of the automatic image are yet to be understood. I think the understanding will, as always, come as a realization only after the changes are already in play. In the meantime, the question of automation lingers, as we have already passed the symbolic line where technical progress could be paused to better reflect upon it in terms of social and environmental justice.
Things are really difficult for some people around the world. Machines seem to be thriving.
The pictures in this publication were generated using the prompt: “A depiction of the day an artificial intelligence replaced the human worker. A high quality, award winning journalistic photo.”
By DALL·E, an image generation model built by OpenAI, a company founded by Elon Musk, Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba, and John Schulman.