We've seen the chaos wrought by "fake news," but as video spoofing becomes easier, we are rapidly approaching a time when it may be impossible to trust any moving image. This could have serious ramifications for media, criminal justice, and privacy. But perhaps we're simply in the midst of a learning curve, the same one we climbed with the rise of Photoshop. Many people are still fooled by fake photos, but we now have the means to sniff them out and call them out. A deeper cultural question may arise when AI is not just faking linear videos but building artificial versions of ourselves that can answer emails, calls, and video chats exactly as we would. What kind of etiquette and social rules will we need when our artificial selves start interacting with loved ones, colleagues, and even each other?