git clone https://github.com/duanjunwen/SecurityAIGC.git
cd SecurityAIGC
pip install -e .
We obtain non-synthetic data through Multimodal Fake News Detection via CLIP-Guided Learning (https://arxiv.org/abs/2205.14304). Please download the synthetic data from this link: https://pan.baidu.com/s/12jNXDQXXGeMqLigytlcMBA (extraction code: kd67). Then place the downloaded datasets in ./dataset/weibo and ./dataset/politifact respectively.
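To catch placement mistakes early, here is a minimal sketch that checks the two dataset directories named above exist before training. The helper itself is hypothetical (not part of the repo); only the ./dataset/weibo and ./dataset/politifact paths come from the instructions:

```python
from pathlib import Path

def check_datasets(base="./dataset", expected=("weibo", "politifact")):
    """Return the expected dataset subdirectories that are missing under base."""
    base = Path(base)
    return [name for name in expected if not (base / name).is_dir()]

missing = check_datasets()
if missing:
    print(f"Missing dataset directories: {missing}")
else:
    print("All dataset directories found.")
```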
Download the pretrained model
python ./script/download_pretrained_model.py
Set task_type to "pretrain" in the config:
# ./config/train/train_multi_mode.py
task_type = "pretrain"
Run MultiModel with Qwen2.5-0.5B:
python ./script/gnd/train_multi_mode_with_qwen.py ./config/train/train_multi_mode.py
Set task_type to "finetune" in the config:
# ./config/train/train_multi_mode.py
task_type = "finetune"
Run MultiModel with Qwen2.5-0.5B:
python ./script/gnd/train_multi_mode_with_qwen.py ./config/train/train_multi_mode.py
Set task_type to "inference" in the config:
# ./config/train/train_multi_mode.py
task_type = "inference"
Run MultiModel with Qwen2.5-0.5B as the zh-Han text encoder:
python ./script/gnd/train_multi_mode_with_qwen.py ./config/train/train_multi_mode.py
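The three runs above differ only in the task_type value read from the config. As a rough sketch of how a training entry point might branch on that value (the helper and the stage routines are hypothetical, not the repo's actual code):

```python
def run_stage(task_type, stages=None):
    """Dispatch to a pretrain/finetune/inference routine based on task_type."""
    # Placeholder routines; the real script would launch training or inference.
    stages = stages or {
        "pretrain": lambda: "pretraining",
        "finetune": lambda: "finetuning",
        "inference": lambda: "inference",
    }
    if task_type not in stages:
        raise ValueError(
            f"task_type must be one of {sorted(stages)}, got {task_type!r}"
        )
    return stages[task_type]()

print(run_stage("pretrain"))
```

Any other value of task_type in ./config/train/train_multi_mode.py would fail fast rather than silently run the wrong stage.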