ModelScope Library provides the foundation for building the model ecosystem of ModelScope, including the interfaces and implementations for integrating various models into ModelScope.

Human parsing model M2FP: https://modelscope.cn/models/damo/cv_resnet101_image-multiple-human-parsing

```shell
# Step5: clone FaceChain from GitHub
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/facechain.git --depth 1
cd facechain
```
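Models hosted on ModelScope, such as the M2FP human parsing model linked above, are typically loaded through the ModelScope pipeline API. The sketch below is an illustration only: it assumes the `modelscope` package is installed, and the task name (`image_segmentation`) is an assumption that should be verified against the model card.

```python
# Model ID taken from the ModelScope model card linked above.
MODEL_ID = 'damo/cv_resnet101_image-multiple-human-parsing'

def build_parsing_pipeline():
    """Lazily build a ModelScope pipeline for the M2FP human parsing model.

    Assumes the `modelscope` package is installed. The task name below is
    an assumption and may differ between library versions; check the model
    card if the pipeline fails to construct.
    """
    from modelscope.pipelines import pipeline
    from modelscope.utils.constant import Tasks
    return pipeline(Tasks.image_segmentation, model=MODEL_ID)

# Usage (downloads the model weights on first run):
# parser = build_parsing_pipeline()
# result = parser('path/to/photo.jpg')
```

Keeping the `modelscope` imports inside the helper means the module can be imported even on machines where the library is not installed.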
When inferring, please edit the code in run_inference.py:

```python
# Fill in the folder of the images after preprocessing above; it must be the same as during training
processed_dir = './processed'
# The number of images to generate in inference
num_generate = 5
# The Stable Diffusion base model used in training, no need to be changed
base_model = 'ly261666/cv_portrait_model'
# The version number of this base model, no need to be changed
revision = 'v2.0'
# This base model may contain multiple subdirectories of different styles; currently we use film/film, no need to be changed
base_model_sub_dir = 'film/film'
# The folder where the model weights are stored after training; it must be the same as during training
train_output_dir = './output'
# Specify a folder to save the generated images; this parameter can be modified as needed
output_dir = './generated'
```

film/film: This base model may contain multiple subdirectories of different styles; currently we use film/film, no need to be changed.

processed: The folder of the processed images after preprocessing; this parameter needs to be passed the same value in inference, no need to be changed.

imgs: This parameter needs to be replaced with the actual value. It means a local file directory that contains the original photos used for training and generation.

Add validate & ensemble for LoRA training, and InpaintTab (hidden in Gradio for now). (August 28th, 2023 UTC)
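The run_inference.py settings above boil down to a handful of paths and a count. As a minimal, self-contained sketch (the helper `plan_output_paths` is hypothetical, not a FaceChain API), this is how the output folder and per-image file names could be prepared:

```python
from pathlib import Path

# Values mirror the run_inference.py settings above.
num_generate = 5
output_dir = './generated'

def plan_output_paths(output_dir: str, num_generate: int) -> list:
    """Create the output folder and return one target path per generated image.

    Hypothetical helper for illustration only; FaceChain's own naming
    scheme may differ.
    """
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    return [out / f"generated_{i:03d}.png" for i in range(num_generate)]

paths = plan_output_paths(output_dir, num_generate)
print(len(paths))     # 5
print(paths[0].name)  # generated_000.png
```

This mirrors why `output_dir` can be changed freely while `processed_dir` and `train_output_dir` must match the training run: only the destination of the results is arbitrary.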
FaceChain is a deep-learning toolchain for generating your Digital-Twin. With a minimum of 1 portrait photo, you can create a Digital-Twin of your own and start generating personal portraits in different settings (multiple styles now supported!). You may train your Digital-Twin model and generate photos via FaceChain's Python scripts, via the familiar Gradio interface, or via sd webui.

Support a series of new style models in a plug-and-play fashion. Refer to: Features (August 16th, 2023 UTC)

Support super resolution🔥🔥🔥, providing multiple resolution choices (512×512, 768×768, 1024×1024, 2048×2048). (November 13th, 2023 UTC)
```shell
# Step1: My Notebook -> PAI-DSW -> GPU environment

# Step2: Open the Terminal and clone FaceChain from GitHub:
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/facechain.git --depth 1

# Step3: Enter the Notebook cell
```

Face detection model DamoFD: https://modelscope.cn/models/damo/cv_ddsar_face-detection_iclr23-damofd

ly261666/cv_portrait_model: The Stable Diffusion base model from the ModelScope model hub, which will be used for training, no need to be changed.

Note: FaceChain currently assumes a single GPU. If your environment has multiple GPUs, please use the following instead:

```shell
CUDA_VISIBLE_DEVICES=0 python3 app.py
```

Step6: The ModelScope notebook has a free tier that allows you to run the FaceChain application; refer to ModelScope Notebook. In addition to the ModelScope notebook and ECS, users may also start a DSW instance with the ModelScope (GPU) image to create a ready-to-use environment. You can find the generated personal digital-image photos in the output_dir.

Algorithm Introduction

Architectural Overview

Face quality assessment model FQA: https://modelscope.cn/models/damo/cv_manual_face-quality-assessment_fqa

Colab notebook is available now! You can experience FaceChain directly with our Colab Notebook. (August 15th, 2023 UTC)
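When launching programmatically rather than from a terminal, the single-GPU restriction noted above can be enforced by setting `CUDA_VISIBLE_DEVICES` before any CUDA-using library initializes. A minimal sketch (the actual app launch is left as a comment, since it requires the full FaceChain environment):

```python
import os

# Pin the process to GPU 0; this is the programmatic equivalent of the
# shell form `CUDA_VISIBLE_DEVICES=0 python3 app.py` shown above.
# It must run before torch/CUDA is first imported to take effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Launch shown as a comment only (requires the FaceChain environment):
# import subprocess
# subprocess.run(["python3", "app.py"], check=True)

print(os.environ["CUDA_VISIBLE_DEVICES"])  # 0
```

Setting the variable in-process is convenient in notebooks, where prefixing the shell command is not always possible.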