TensorFlow 2

download and install

  • root script
  • Anaconda
  • FFMPEG
  • CUDA 8.0
  • CUDNN
  • install Anaconda; make sure you tick the Add to PATH option, it makes life easier
  • make a new folder and extract the root script and FFMPEG into it
  • to be extracted and used in the root folder (face-swap)
  • patched align_images & merge_faces: extract them to \face-swap
  • test-output-of-training script: rename it to init.py
  • install
  • install tensorflow : pip install tensorflow or pip install tensorflow-gpu
  • install dlib : pip install dlib (if it doesn’t work, you might need to find and download the wheel and pip install the wheel)
  • install opencv: pip install opencv-python
  • install keras: pip install keras
  • install scipy : pip install scipy
  • install numpy : pip install numpy
  • install h5py : pip install h5py
  • install matplotlib : pip install matplotlib
  • install tqdm: pip install tqdm
  • install skimage: pip install scikit-image (to verify everything imports, see the check below)
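
Before moving on, it is worth checking that every package actually imports from inside the activated tensorflow environment. A minimal sanity-check sketch (the file name check_installs.py and the package list are just taken from the installs above):

    # run inside the activated "tensorflow" environment: >activate tensorflow >python check_installs.py
    # prints each package's version, or the import error if something is missing
    import importlib

    packages = ["tensorflow", "dlib", "cv2", "keras", "scipy",
                "numpy", "h5py", "matplotlib", "tqdm", "skimage"]

    for name in packages:
        try:
            module = importlib.import_module(name)
            print(name, getattr(module, "__version__", "ok"))
        except ImportError as err:
            print(name, "MISSING:", err)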

  • intro
  • get ready
  • run cmd, >cd [FACE-SWAP DIRECTORY HERE]
  • run in cmd, >activate tensorflow >python train.py
  • wait for several hours; to stop, click on the faces window and press Q
  • exit
  • moving on
  • collect tons of pictures of target (200+)
  • find the video that satisfies your imagination (POV view recommended)
  • convert the video to jpg using FFMPEG or any video software. run cmd, >cd [BIN DIRECTORY INSIDE OF THE EXTRACTED FFMPEG HERE] >ffmpeg -i file.mpg -r 1/1 $filename%d.jpg (change file.mpg to your video name with its extension, and $filename%d.jpg to any name, e.g. riley%d.jpg). a Python/OpenCV alternative is sketched after this list
  • put both sets of images in separate folders
  • run cmd, >python align_images_masked.py [TARGET FOLDER NAME] >python align_images_masked.py [SOURCE FOLDER NAME]
  • copy the aligned folders from both the target and source folders to the \face-swap\data folder
  • rename the trump and cage folders to trumpA and cageA, and rename the target and source folders to trump and cage
  • run the training again, wait several hours
  • am i done?
  • run cmd in the \face-swap folder >activate tensorflow >python init.py
  • after it is done, a new output folder can be accessed
  • have a look at the training output. if satisfied, proceed
  • run cmd >python merge_faces_masked.py [SOURCE FOLDER NAME]
  • a new aligned folder can be accessed in the source folder
  • take a look at the merged faces
  • but, no gif?
  • go to GIFMaker to convert the jpgs to a gif (it's easier!)
  • GIF Animator is also available if you have 300 frames
  • DONE
  • kinda bored. need some audio!
  • use ffmpeg to convert the gif to video: ffmpeg -i try.gif -movflags faststart -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" try.mp4
  • use any video software
    • add the source video (video + audio)
    • add the final deepfake video
    • align it correctly to the time/frames of the source video
    • delete the source video
    • render and voilà, now you have a new fake video with sound! (or let ffmpeg do the audio mux; see the sketch after this list)
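
For the frame-extraction step above, here is a hedged Python/OpenCV alternative to the ffmpeg command, assuming opencv is already installed; the file names and frame interval are only examples:

    # dump frames from the source video to numbered jpgs (alternative to ffmpeg)
    import cv2

    video_path = "file.mp4"      # your source video (example name)
    out_pattern = "riley%d.jpg"  # output pattern, gives riley1.jpg, riley2.jpg, ...
    every_nth = 1                # 1 keeps every frame; 25 roughly matches -r 1/1 on a 25 fps clip

    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_nth == 0:
            saved += 1
            cv2.imwrite(out_pattern % saved, frame)
        index += 1
    cap.release()
    print("saved", saved, "frames")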
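
And for the audio step, instead of a video editor you can let ffmpeg copy the deepfake's video stream and take the audio from the source clip. A small sketch (file names are examples, and it assumes ffmpeg is on your PATH):

    # mux the source clip's audio onto the rendered deepfake video
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-i", "try.mp4",     # the deepfake video (no audio)
        "-i", "source.mp4",  # the original clip, with audio
        "-c:v", "copy",      # keep the deepfake's video stream as-is
        "-map", "0:v:0",     # video from the first input
        "-map", "1:a:0",     # audio from the second input
        "-shortest",
        "fake_with_sound.mp4",
    ], check=True)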

  • additional info
  • i skipped the face-alignment script and went straight to merge_faces, but it turned out good
  • my model data: target 225 images, source 356 images. so i think it is better to have lots of images of both the target's and the source's face
  • try not to snap faces through VLC, it just wastes your time. just convert the video to jpgs using ffmpeg (details in the plus section)
  • i'm using windows, anaconda and the original script from reddit (not from the repo). you may well have problems too in this project, so try to ask for solutions
  • you can still use align_images_masked to get nice, aligned, 64×64 crops of both faces (just copy the aligned photos to face-swap\data and rename them; a small copy/rename sketch follows this list)
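
Here is a small sketch of that copy/rename step, assuming the default trump/cage folders from the original script; all of the paths are just examples you will need to adjust:

    # move the example data out of the way and drop your aligned crops into \face-swap\data
    import shutil
    from pathlib import Path

    face_swap = Path(r"C:\face-swap")            # your face-swap folder (example path)
    target_aligned = Path(r"C:\target\aligned")  # output of align_images_masked.py for the target
    source_aligned = Path(r"C:\source\aligned")  # output of align_images_masked.py for the source

    data = face_swap / "data"
    for old, backup in [("trump", "trumpA"), ("cage", "cageA")]:
        if (data / old).exists() and not (data / backup).exists():
            (data / old).rename(data / backup)

    # train.py expects the folders to be called trump and cage
    shutil.copytree(str(target_aligned), str(data / "trump"))
    shutil.copytree(str(source_aligned), str(data / "cage"))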

  • plus
  • if the final output of merge_faces is not the target face, try swapping the decoder weights (the shared-encoder / two-decoder idea behind this is sketched after this list),
  • from
  • decoder_A.load_weights( "models/decoder_A.h5" ), decoder_B.load_weights( "models/decoder_B.h5" )
  • to
  • decoder_A.load_weights( "models/decoder_B.h5" ), decoder_B.load_weights( "models/decoder_A.h5" )
  • using a 2GB GPU? try one of these:
  • in model.py, change:
    ENCODER_DIM = 512
  • OR in train.py, change:
    batch_size = 32
    OR: if epoch % 50 == 0:
    OR: figure = figure.reshape( (1,2) + figure.shape[1:] )
  • i didn't manage to use this script, but you can try it (source)
  • in models.py:

    def Decoder():
        input_ = Input(shape=(4, 4, 512))
        x = input_
        x = upscale(256)(x)
        x = upscale(128)(x)
        x = upscale(64)(x)
        # additional convolutional layers
        x = Conv2D(3, kernel_size=5, padding='same')(x)
        x = LeakyReLU(0.1)(x)
        x = Conv2D(3, kernel_size=5, padding='same')(x)
        x = LeakyReLU(0.1)(x)
        x = Conv2D(3, kernel_size=5, padding='same', activation='sigmoid')(x)
        return Model(input_, x)
  • underestimating yourself about doing this whole deepfakes project? try not to
  • my laptop specs: 4GB of RAM, 2GB of GPU
  • i'm using mobile data for internet, so it cost me some money to buy data
  • I have been exploring this project since 25 December, and finally finished it (with some help from the deepfakes community). So starting with zero knowledge, I still managed to complete it in 5 days XD
  • and mostly, i'm just a normal university student with no coding experience whatsoever, but I still managed to get the final result of this project
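
On the decoder-swap tip above: the reason it works is that the script trains one shared encoder with a separate decoder per face, so whichever decoder you load decides which face comes out. Below is a minimal, self-contained Keras sketch of that idea; it is not the script's actual model.py, and the layer sizes are only illustrative:

    # shared encoder + two decoders: the core trick behind the face swap
    from keras.layers import Input, Dense, Flatten, Reshape, Conv2D, UpSampling2D, LeakyReLU
    from keras.models import Model
    from keras.optimizers import Adam

    IMAGE_SHAPE = (64, 64, 3)
    ENCODER_DIM = 512  # the value the 2GB-GPU tip above suggests

    def build_encoder():
        inp = Input(shape=IMAGE_SHAPE)
        x = Conv2D(128, 5, strides=2, padding='same')(inp)
        x = LeakyReLU(0.1)(x)
        x = Conv2D(256, 5, strides=2, padding='same')(x)
        x = LeakyReLU(0.1)(x)
        x = Flatten()(x)
        x = Dense(ENCODER_DIM)(x)
        x = Dense(16 * 16 * 256)(x)
        x = Reshape((16, 16, 256))(x)
        return Model(inp, x)

    def build_decoder():
        inp = Input(shape=(16, 16, 256))
        x = UpSampling2D()(inp)
        x = Conv2D(128, 5, padding='same')(x)
        x = LeakyReLU(0.1)(x)
        x = UpSampling2D()(x)
        x = Conv2D(3, 5, padding='same', activation='sigmoid')(x)
        return Model(inp, x)

    encoder = build_encoder()
    decoder_A = build_decoder()  # trained only on face A
    decoder_B = build_decoder()  # trained only on face B

    x = Input(shape=IMAGE_SHAPE)
    autoencoder_A = Model(x, decoder_A(encoder(x)))
    autoencoder_B = Model(x, decoder_B(encoder(x)))
    autoencoder_A.compile(optimizer=Adam(), loss='mean_absolute_error')
    autoencoder_B.compile(optimizer=Adam(), loss='mean_absolute_error')

    # conversion: push a face-A image through encoder + decoder_B to get the swap,
    # which is why loading decoder_B.h5 into decoder_A (or vice versa) flips the output face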

  • try not to demand that the reddit community produce a gif of a model for you
  • many thanks to deepfakes and the community for this project; I gained lots of new knowledge in python and more
  • HAPPY NEW YEAR!

https://www.reddit.com/r/deepfakes/comments/7n6mly/new_here_read_this_windows_x_anaconda_python_x/

