Week 5 - Synthetic Media Group Project

Erin & Zoe

At the very beginning of this project, we discussed many ideas that could possibly be turned into something worth mulling over.

  • Changing a dating app profile to get different matches: real or fake? How would you distinguish them?
  • Tricking people to see if they know what's real or fake?
  • What factors affect the decision?
  • Options to create an avatar of yourself: would you consent now to let your identity be used in the future?

I also thought about how we could trick AI object recognition with our human manipulations: "When we use machine learning to detect and identify objects, it immediately recognizes the most simple and common objects we normally use. What if we messed with the detection by printing the objects on paper? Or overlaid different objects, building the outline out of completely different materials?"

We summed up the concepts into three different areas:

-Model: Generate sentence descriptions of images
See how far we can stretch the functionality of this model. What if we print a photo and hold it up to the camera? How detailed does a drawing have to be before the model detects what we're drawing? Putting clothes on a chair to see if the model detects it as a person.

-Datamoshing (apps like MoshUp)
Distorting reality using AI, messing with physical laws that our brains rely on. It feels eerie when it transitions to something else, and then you feel comfortable again when the next image shows up, until it changes again.
Journey of a day? Body parts?
Outcome idea: two channels - one recording normal human actions (cooking, walking, a series of tasks), the other presenting the same series of actions in a strange way.

-Can people tell what's real and fake? (poem generators, face generator, pizza generator, landscape generator)
Show people something made by AI (like a poem), ask for their interpretation, then reveal afterward that it was AI.
Show a picture of a generated person and ask questions.
We have to be careful with the questions because we don't want to prime people or give it away: "Do you see something wrong with this?" vs. "Tell me what you think this person's story is."

With all these concepts and ideas established, we decided to treat AI as a third partner in our project (in some way, AI is already a third partner in everyone's daily life), and we also tried to include other people, giving them limited amounts of information as multiple new inputs for the AI to generate multiple new outputs.

We wrote the scripts first and sent them to the AI, which helped us create the images for an original storyline. We then gave the images to our participants, who wrote new scripts each time, forming a loop of new storylines.

1. We write the scripts for our original storyline.
2. We feed our scripts to the AI to create images.
3. We take the generated images as the starting point of a new storyline.
4. We give the images to a participant, who writes the storyline they have in mind.
5. We feed the new scripts to the AI, which creates more images for the next new storyline.
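The five steps above form a simple feedback loop, which can be sketched in code. This is only an illustrative sketch: `generate_images` and `participant_writes_script` are hypothetical placeholder functions standing in for the actual image-generation software and the human participants.

```python
# Hypothetical sketch of the script -> images -> new script feedback loop.
# In the real project, images came from generative software and the
# new scripts were written by human participants; here both are stubs.

def generate_images(script):
    # Placeholder: pretend each line of the script becomes one image.
    return [f"image of: {line}" for line in script]

def participant_writes_script(images):
    # Placeholder: pretend the participant leaves 1-2 sentences per image.
    return [f"a new story inspired by {img}" for img in images]

def story_loop(original_script, rounds=3):
    """Run the loop: script -> images -> participant's new script -> ..."""
    script = original_script
    storylines = [script]  # keep every storyline, starting with ours
    for _ in range(rounds):
        images = generate_images(script)            # steps 2-3
        script = participant_writes_script(images)  # steps 4-5
        storylines.append(script)
    return storylines

storylines = story_loop(["a person wakes up", "they walk to the sea"])
print(len(storylines))  # the original storyline plus one per round
```

Each pass through the loop hands the AI a fresh human-written script, so the storylines drift further from the original with every round.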

For the presentation, I thought of making a short video capturing the moments when these stories and scripts were being created and how the participants were involved in the process, so I made a small storyboard of how the video would go:

We documented the participants' different scripts for the AI to generate new images.
We also tried different ways of communicating the tasks to the participants, and settled on one final approach: let the participant look at the images first for 30-40 seconds, with the clear instructions "a storyline with a clear starting point and end point" and "for each image, leave 1-2 sentences summarizing or creating content based on what you think and see."

Shout out to the computer! And to the runwayML software (even though you had plenty of bugs).
Here is our final video.