What is Waifu2x?

Please throw some money my way, either directly via PayPal or through my Ko-fi; maybe hold off on my Patreon until I've fully set it up.

Waifu2x-Caffe is a deep-learning system for upscaling images. According to the documentation in Waifu2x's source repository, it is possible to train your own models for use with it.

Train Your Own Model

Note 1: If you have the cuDNN library, you can use the cuDNN kernel with the -backend cudnn option, and you can convert a trained cuDNN model to a cunn model with tools/rebuild.lua.

Note 2: The commands used to train waifu2x's pretrained models are available at appendix/train_upconv_7_art.sh and appendix/train_upconv_7_photo.sh; they may be helpful.

Data Preparation

Generating a file list.

find /path/to/image/dir -name "*.png" > data/image_list.txt

You should use noise-free images. The waifu2x author trained on 6,000 high-resolution, noise-free PNG images.
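The find command above can be dry-run on a throwaway directory to confirm the list format before pointing it at your real dataset (the /tmp/wx_demo path here is illustrative, not from the waifu2x docs):

```shell
# Create a scratch directory with a mix of PNG and non-PNG files.
mkdir -p /tmp/wx_demo/images
touch /tmp/wx_demo/images/a.png /tmp/wx_demo/images/b.png /tmp/wx_demo/images/skip.jpg

# Same pattern as the real command, writing the list to a scratch file.
find /tmp/wx_demo/images -name "*.png" > /tmp/wx_demo/image_list.txt

wc -l < /tmp/wx_demo/image_list.txt   # two entries: only the PNGs are listed
```

Note that -name "*.png" is case-sensitive; if your dataset mixes .png and .PNG, use -iname instead.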

Converting training data.

th convert_data.lua

Train a Noise Reduction (Level 1) Model

mkdir models/my_model
th train.lua -model_dir models/my_model -method noise -noise_level 1 -test images/miku_noisy.png
# usage
th waifu2x.lua -model_dir models/my_model -m noise -noise_level 1 -i images/miku_noisy.png -o output.png

You can check the model's performance with models/my_model/noise1_best.png.

Train a Noise Reduction (Level 2) Model

th train.lua -model_dir models/my_model -method noise -noise_level 2 -test images/miku_noisy.png
# usage
th waifu2x.lua -model_dir models/my_model -m noise -noise_level 2 -i images/miku_noisy.png -o output.png

You can check the model's performance with models/my_model/noise2_best.png.

Train a 2x Upscaling Model

th train.lua -model upconv_7 -model_dir models/my_model -method scale -scale 2 -test images/miku_small.png
# usage
th waifu2x.lua -model_dir models/my_model -m scale -scale 2 -i images/miku_small.png -o output.png

You can check the model's performance with models/my_model/scale2.0x_best.png.

Train a 2x Upscaling and Noise Reduction Fusion Model

th train.lua -model upconv_7 -model_dir models/my_model -method noise_scale -scale 2 -noise_level 1 -test images/miku_small.png
# usage
th waifu2x.lua -model_dir models/my_model -m noise_scale -scale 2 -noise_level 1 -i images/miku_small.png -o output.png

You can check the model's performance with models/my_model/noise1_scale2.0x_best.png.

My Theory

I have a theory: if you trained a model on specific games and figured out how to use ReShade or a similar utility to inject Waifu2x's deep-learning resize as a shader pass, you could get something very similar to what DLSS offers, but without requiring an RTX card. I should note that since I have no idea how to do any of this, my estimates are based on generic models. There is also the issue that the program I'm using saves to a file instead of processing everything internally in the graphics pipeline, and it does no edge detection, which could speed things up considerably.
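A rough sanity check on why the save-to-file approach is nowhere near real-time: at 60 fps a game has about 16.7 ms per frame, while the fastest conversion measured in the results section (900P to 1440P) took 592 ms of processing time. A quick back-of-the-envelope comparison:

```shell
# Compare a 60 fps per-frame budget with the measured 592 ms
# waifu2x-caffe pass (900P -> 1440P processing time from the results).
awk 'BEGIN {
  budget_ms = 1000 / 60    # ~16.7 ms available per frame at 60 fps
  pass_ms   = 592          # measured processing time in ms
  printf "budget=%.1fms pass=%dms shortfall=%.0fx\n",
         budget_ms, pass_ms, pass_ms / budget_ms
}'
# prints: budget=16.7ms pass=592ms shortfall=36x
```

So even ignoring file I/O entirely, the pass would need to be roughly 36 times faster before it could fit inside a 60 fps frame budget.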

The Results

The Native Render Quality

Deus Ex Mankind Divided at 900P (native)

Deus Ex Mankind Divided at 1080P (native)

Deus Ex Mankind Divided at 1440P (native)

Deus Ex Mankind Divided at 4K (native)

900P Upsamples

Deus Ex Mankind Divided at 1440P upscaled from 900P

900P to 1440P (1.60x): Successfully converted. Used processor: cuDNN. Processing time: 00:00:00.592. Initialization time: 00:00:00.038. cuDNN-check time: 00:00:00.000.

Deus Ex Mankind Divided at 4K upscaled from 900P

900P to 4K (2.40x): Successfully converted. Used processor: cuDNN. Processing time: 00:00:02.318. Initialization time: 00:00:01.596. cuDNN-check time: 00:00:01.952.

1080P Upsamples

Deus Ex Mankind Divided at 1440P upscaled from 1080P

1080P to 1440P (1.3332x): Successfully converted. Used processor: cuDNN. Processing time: 00:00:00.752. Initialization time: 00:00:00.029. cuDNN-check time: 00:00:00.000.

Deus Ex Mankind Divided at 4K upscaled from 1080P

1080P to 4K (2x): Successfully converted. Used processor: cuDNN. Processing time: 00:00:00.875. Initialization time: 00:00:00.026. cuDNN-check time: 00:00:00.000.
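For reference, the scale factors the tool reports follow directly from the target and source vertical resolutions (the 1.3332x above is just the tool's own rounding of 1440/1080):

```shell
# Derive each reported scale factor from target/source vertical resolution.
for pair in 1440/900 2160/900 1440/1080 2160/1080; do
  awk -v p="$pair" 'BEGIN { split(p, r, "/"); printf "%s = %.4fx\n", p, r[1]/r[2] }'
done
# prints:
# 1440/900 = 1.6000x
# 2160/900 = 2.4000x
# 1440/1080 = 1.3333x
# 2160/1080 = 2.0000x
```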

Real World News

It looks like something has been brewing at Radeon along these lines. According to 4Gamer, AMD is working with the DirectML framework to bring similar technology to their GPUs. Adam Kozak (Senior Manager, GPU Product Marketing, AMD) said: "...By the way, Radeon VII scored about 1.62 times the GeForce RTX 2080 in Luxmark, which uses an OpenCL-based GPGPU ray-tracing renderer. Based on these facts, I think something like NVIDIA's DLSS can be done with a GPGPU approach on our GPUs." This is great news, but it's old news if you think about it: the DirectML and DXR frameworks aren't really locked to specific hardware. You can read more for yourself here:

Hopefully NVIDIA, for their part, decides to backport full driver support to Pascal at a minimum, and ideally as far back as Maxwell.