Video requirements
* High resolution (4K WebM is best; anything lower than 1080p is not recommended)
* Faces not too far from camera and unobstructed
* Multiple angles, facial expressions
* Brightly and evenly lit
* Faces should somewhat match (beard, hat, hair, skin color, shape, glasses)
* You need at least 2 minutes of good-quality video; interview footage works well (a quick way to check a clip is sketched below)
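If you want a quick check of whether a clip meets these requirements, a small Python sketch like the one below prints its resolution and duration. It assumes ffprobe (which ships with ffmpeg) is on your PATH, and candidate.mp4 is just a placeholder name:
```python
# Sketch: check a candidate source clip's resolution and duration with ffprobe.
# Assumes ffprobe (ships with ffmpeg) is on your PATH; "candidate.mp4" is a placeholder.
import json
import subprocess

def probe(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height:format=duration",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    width = info["streams"][0]["width"]
    height = info["streams"][0]["height"]
    duration = float(info["format"]["duration"])
    return width, height, duration

w, h, dur = probe("candidate.mp4")
print(f"{w}x{h}, {dur:.0f} seconds")
if h < 1080:
    print("Below 1080p - not recommended as a source.")
if dur < 120:
    print("Shorter than 2 minutes - you probably want more footage.")
```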
Downloading Software
* Download DeepFaceLab
* Make sure to pick the right build for your GPU. If you don’t have a GPU, use the CLSSE build
* The downloaded .exe is a self-extracting archive that unpacks the program to the location of your choosing.
* A workspace folder will be created. This is the folder where all the action will happen.
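As a rough sanity check before you start, the layout you end up working with looks like the sketch below. The workspace path is a placeholder and the folder names are taken from the steps that follow, so adjust it to wherever you extracted the build:
```python
# Sketch: sanity-check the DeepFaceLab workspace layout before starting.
# The workspace path is a placeholder; folder names follow the steps in this guide.
import glob
import os

workspace = r"C:\DeepFaceLab\workspace"  # wherever your build created its workspace folder

# The two input videos described in the steps below (any ffmpeg-readable format).
for stem in ("data_src", "data_dst"):
    hits = glob.glob(os.path.join(workspace, stem + ".*"))
    print(f"{stem}: {hits[0] if hits else 'MISSING - add it before extracting'}")

# Folders that the extraction, training, and conversion steps will create and fill.
for sub in (r"data_src\aligned", r"data_dst\aligned", r"data_dst\merged", "model"):
    path = os.path.join(workspace, sub)
    print(f"{sub}: {'exists' if os.path.isdir(path) else 'not created yet (that is fine)'}")
```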
Extracting faces from source video
* Name the source video data_src and place it in the \workspace folder.
* Most formats that ffmpeg supports will work
* Run 2) extract images from video data_src
* Use PNG (better quality)
* Choose an FPS of 10 or lower that gets you at least 2,000 images (4,000-6,000 is ideal; see the sketch after this list for the arithmetic)
* Run 4) data_src extract faces S3FD best GPU
* Extracted faces saved to data_src\aligned.
* Run 4.2.2) data_src sort by similar histogram
* Groups similar detected faces together
* Run 4.1) data_src check result
* Delete faces that are not the right person, super blurry, cut off, upside down or sideways, or obstructed
* Run 4.2.other) data_src util add landmarks debug images
* New images with a _debug suffix are created in data_src\aligned, which let you see the detected facial landmarks
* Look for faces where landmarks are misaligned and delete the _debug and original images for those
* Once you’re done, delete all _debug images by using the search bar to filter for _debug
* Run 4.2.6) data_src sort by final
* Choose a target image number around 90% of your total faces
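A note on the FPS question in step 2: the goal is simply for duration x FPS to land somewhere in the 2,000-6,000 image range while staying at or below 10 FPS. Here is a minimal sketch of that arithmetic; the 4,000-image target and the ffprobe call are my own choices, not part of DeepFaceLab:
```python
# Sketch: pick an FPS <= 10 for "2) extract images from video data_src" so that
# duration * FPS lands in the 2,000-6,000 image sweet spot. The 4,000-image target
# is an arbitrary middle value; the workspace path is a placeholder.
import subprocess

def duration_seconds(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(out)

dur = duration_seconds(r"workspace\data_src.mp4")
fps = min(10, max(1, round(4000 / dur)))
frames = int(dur * fps)
print(f"Duration {dur:.0f} s -> enter FPS {fps} (about {frames} images)")
if frames < 2000:
    print("Even at 10 FPS this clip gives fewer than 2,000 images - consider longer footage.")
```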
Extracting faces from destination video
* Name your destination video data_dst and put it in the \workspace folder
* Run 3.2) extract PNG from video data_dst FULL FPS
* Run 5) data_dst extract faces S3FD best GPU
* Run 5.2) data_dst sort by similar histogram
* Run 5.1) data_dst check results
* Delete all faces that are not the target face to swap, or are the target face but upside down or sideways. Every face that you leave in will be swapped in the final video.
* Run 5.1) data_dst check results debug
* Delete any faces that are not correctly aligned or missing alignment, paying special attention to the jawline. We will manually align these frames in the next step.
* Run 5) data_dst extract faces MANUAL RE-EXTRACT DELETED RESULTS DEBUG
* We run this step to manually align the frames we deleted in the last step. The manually aligned faces will be automatically extracted and used for converting. You must manually align every frame you want converted (swapped), even if it's a lot of work; if you skip a frame, the final video keeps the original face for it. (The sketch after this list can help you spot frames that still have no aligned face.)
* Manual alignment instructions:
* For each face, move your cursor around until it aligns correctly onto the face
* If it’s not aligning, use the mouse scroll wheel / zoom to change the size of the boxes
* When alignment is correct, hit enter
* Go back and forth between frames with the , and . keys. If you don't want to align a frame, just skip it with .
* Left mouse click locks/unlocks the landmarks; you can lock them either by clicking or by hitting Enter.
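Since any data_dst frame without an aligned face keeps its original face in the final video, it can help to list which frames still have nothing in data_dst\aligned. The sketch below does that, assuming the usual naming where an aligned face is called <frame name>_<index> (e.g. 00042_0.jpg); check a few filenames in your own aligned folder before relying on it:
```python
# Sketch: list data_dst frames that still have no extracted/aligned face.
# Assumes aligned faces are named "<frame name>_<index>.<ext>" (e.g. 00042_0.jpg);
# verify against your own workspace\data_dst\aligned folder before relying on this.
import os

dst_dir = r"workspace\data_dst"
aligned_dir = os.path.join(dst_dir, "aligned")

frames = {os.path.splitext(f)[0] for f in os.listdir(dst_dir)
          if f.lower().endswith(".png")}
covered = {os.path.splitext(f)[0].rsplit("_", 1)[0] for f in os.listdir(aligned_dir)}

missing = sorted(frames - covered)
print(f"{len(missing)} of {len(frames)} frames have no aligned face")
for name in missing:
    print(" ", name)
```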
Training
* Run 6) train SAEHD
* You will need to run this for a long time to get a good-quality deepfake; keep checking the preview until you're satisfied with the result.
Convert
* Run 7) convert SAEHD
* While conversion is running, you can preview the final images in the data_dst\merged folder to make sure they look right. If they don't, just close the convert window, delete the merged folder, and start conversion again.
* Run 8) converted to mp4
* A bitrate of 3-8 is sufficient for most videos (a rough ffmpeg equivalent of this step is sketched after this list)
* And you're done
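For reference, step 8 essentially re-encodes the merged frames into a video and pulls the audio back in from data_dst. The sketch below is a rough ffmpeg equivalent, not the batch file's exact command; the frame-numbering pattern, the 30 FPS, and the 8M bitrate are placeholders you would need to match to your own clip:
```python
# Sketch: roughly what "8) converted to mp4" does - encode the merged frames and
# pull the audio back in from data_dst. Paths, the %05d frame pattern, the 30 FPS,
# and the 8M bitrate are placeholders; this is not the batch file's exact command.
import subprocess

workspace = r"workspace"
subprocess.run([
    "ffmpeg",
    "-r", "30",                                       # FPS of your data_dst video
    "-i", rf"{workspace}\data_dst\merged\%05d.png",   # merged frames (check your numbering pattern)
    "-i", rf"{workspace}\data_dst.mp4",               # original destination video, used for its audio
    "-map", "0:v", "-map", "1:a?",                    # video from the frames, audio (if any) from data_dst
    "-c:v", "libx264", "-b:v", "8M",                  # the guide's 3-8 range goes here
    "-pix_fmt", "yuv420p",
    rf"{workspace}\result.mp4",
], check=True)
```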
Here are some YouTube tutorials if it's easier for you to follow:
*
*
*
Good luck with your deepfakes and don't forget to leave a like