Assessment Methods
Quantitative Assessment
Assessment methods vary with the application. The following methods are generally used to evaluate the four quantitative metrics: the Pearson correlation coefficient, mean error (ME), mean absolute error (MAE), and root mean square error (RMSE).
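As an illustration of the four metrics, they can be computed on small toy arrays with NumPy. The array values below are invented for demonstration only; the production script later in this section applies the same formulas to real WRF output fields.

```python
import numpy as np

# Hypothetical sample fields from two platforms (toy values for illustration).
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.1, 1.9, 3.2, 3.8])

# Pearson correlation coefficient: covariance normalized by the two norms.
num = np.sum((a - a.mean()) * (b - b.mean()))
den = np.sqrt(np.sum((a - a.mean()) ** 2)) * np.sqrt(np.sum((b - b.mean()) ** 2))
pearson = num / den

# Mean error (ME): average signed difference.
me = np.mean(a - b)

# Mean absolute error (MAE): average magnitude of the difference.
mae = np.mean(np.abs(a - b))

# Root mean square error (RMSE): penalizes large deviations more than MAE.
rmse = np.sqrt(np.mean((a - b) ** 2))

print(pearson, me, mae, rmse)
```

Note that ME can be 0 even when the fields differ (positive and negative errors cancel), which is why MAE and RMSE are assessed alongside it.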
- Python assessment script:
#!/usr/bin/env python
from __future__ import print_function
import argparse
from netCDF4 import Dataset
import numpy as np
from wrf import getvar, interplevel

parser = argparse.ArgumentParser(description='Compare two WRF output files.')
parser.add_argument('file1')
parser.add_argument('file2')
args = parser.parse_args()

try:
    f1 = Dataset(args.file1, 'r')
    f2 = Dataset(args.file2, 'r')
except IOError:
    print("file open failed")
    exit(2)

# Elements to be assessed.
elements = ('T2', 'RAINNC')
indexnumber = len(f1.variables[elements[0]])
for index in range(indexnumber):
    print('%-24s %16s %16s %16s %16s' % ('Var', 'Pearson', 'ME', 'MAE', 'RMSE'))
    for var in elements:
        var_split = var.split('-')
        if len(var_split) > 1:
            # Names such as 'ua-pressure-500' are interpolated to the given level.
            if var_split[0] == 'wspd_wdir':
                var0_f1 = getvar(f1, var_split[0])[0, :]
                var0_f2 = getvar(f2, var_split[0])[0, :]
            else:
                var0_f1 = getvar(f1, var_split[0])
                var0_f2 = getvar(f2, var_split[0])
            var1_f1 = getvar(f1, var_split[1])
            f1_para = interplevel(var0_f1, var1_f1, int(var_split[2]))[index]
            var1_f2 = getvar(f2, var_split[1])
            f2_para = interplevel(var0_f2, var1_f2, int(var_split[2]))[index]
        else:
            f1_para = f1.variables[var][index]
            f2_para = f2.variables[var][index]

        # Calculate the Pearson correlation coefficient.
        f1_mean = np.mean(f1_para)
        f2_mean = np.mean(f2_para)
        numerator = np.sum((f1_para - f1_mean) * (f2_para - f2_mean))
        f1_norm = np.sqrt(np.sum((f1_para - f1_mean) ** 2.))
        f2_norm = np.sqrt(np.sum((f2_para - f2_mean) ** 2.))
        if f1_norm * f2_norm == 0:
            pearson = float('nan')
        else:
            pearson = numerator / (f1_norm * f2_norm)
        # Calculate the ME.
        mean = np.mean(f1_para - f2_para)
        # Calculate the MAE.
        mae = np.mean(abs(f1_para - f2_para))
        # Calculate the RMSE.
        rmse = np.sqrt(np.mean((f1_para - f2_para) ** 2))
        print('%-24s %10.6f %10.6f %10.6f %10.6f' % (var, pearson, mean, mae, rmse))
- diffwrf comparison tool:
diffwrf is a comparison tool built into the WRF application. After WRF is compiled, the tool is generated in the external/io_netcdf directory. It calculates the root mean square (RMS) of each element in the two result files and the RMSE between them, and outputs only the elements that are inconsistent. If the RMSE of every element is 0, there is no output. The following figure shows an example:
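The idea behind this comparison (report statistics only for the fields that differ, stay silent when everything matches) can be sketched in Python. The function name, output layout, and toy data below are illustrative assumptions, not diffwrf internals:

```python
import numpy as np

def compare_fields(fields1, fields2):
    """Print RMS/RMSE statistics only for elements that differ (diffwrf-like behavior)."""
    for name in fields1:
        a = np.asarray(fields1[name], dtype=float)
        b = np.asarray(fields2[name], dtype=float)
        rmse = np.sqrt(np.mean((a - b) ** 2))
        if rmse == 0:
            continue  # identical fields produce no output
        rms1 = np.sqrt(np.mean(a ** 2))
        rms2 = np.sqrt(np.mean(b ** 2))
        print('%-10s rms1=%.6f rms2=%.6f rmse=%.6f' % (name, rms1, rms2, rmse))

# Toy example: 'T2' differs slightly between the two runs, 'RAINNC' does not.
run1 = {'T2': [300.0, 301.0], 'RAINNC': [0.0, 1.0]}
run2 = {'T2': [300.1, 300.9], 'RAINNC': [0.0, 1.0]}
compare_fields(run1, run2)   # prints a line for 'T2' only
```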

- MD5 verification code:
md5sum is a computer program that hashes file content to generate an MD5 checksum; it calculates and verifies the 128-bit MD5 hash value described in RFC 1321. The MD5 value can serve as the digital fingerprint of a file. You can run the md5sum command on the result files produced by the same computing test case on different platforms. If the MD5 values are the same, the result files are bit-for-bit identical.
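The same check can also be done programmatically. This sketch uses Python's standard hashlib module, with temporary files standing in for result files from two platforms; the file names are invented for the example:

```python
import hashlib
import os
import tempfile

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the RFC 1321 MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Stand-ins for result files produced on two platforms (same content here).
with tempfile.TemporaryDirectory() as d:
    p1 = os.path.join(d, 'wrfout_kunpeng')
    p2 = os.path.join(d, 'wrfout_x86')
    for p in (p1, p2):
        with open(p, 'wb') as f:
            f.write(b'example result bytes')
    same = md5_of_file(p1) == md5_of_file(p2)
    print(same)   # True: identical content yields identical MD5 digests
```

Reading in chunks keeps memory use constant, which matters for multi-gigabyte wrfout files.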
Graphical Assessment
The graphical assessment method uses visualization software such as the NCAR Command Language (NCL) to plot the spatial distribution of meteorological elements over an area, so that result differences between platforms can be observed directly.
- NCL graphical assessment
NCL is free visualization software developed by the National Center for Atmospheric Research (NCAR) for scientific data analysis. It has powerful file input and output capabilities and can read and write netCDF-3, netCDF-4 classic, HDF4, binary, ASCII, and other formats. You can use the NCL scripts shipped with WRF to process the output files generated by WRF and produce comparison graphs.
NCL comparison graph generation script:
load "$NCARG_ROOT/lib/ncarg/nclscripts/csm/gsn_code.ncl"
load "$NCARG_ROOT/lib/ncarg/nclscripts/wrf/WRFUserARW.ncl"
load "$NCARG_ROOT/lib/ncarg/nclscripts/csm/gsn_csm.ncl"
load "$NCARG_ROOT/lib/ncarg/nclscripts/csm/contributed.ncl"

begin
  f = addfile("kunpeng/wrfout_d01_2020-07-08_00:00:00","r")
  f1 = addfile("x86/wrfout_d01_2020-07-08_00:00:00","r")

  res = True                      ; Set up some basic plot resources
  res@MainTitle = "METGRID FILES"
  res@Footer = False
  res@cnFillOn = True
  res@gsnSpreadColors = True
  opts = res
  pltres = True
  mpres = True

  wks = gsn_open_wks("pdf","wrf")
  gsn_define_colormap(wks,"NCV_bright")

  ter = wrf_user_getvar(f,"RAINNC",0)
  contour = wrf_contour(f,wks,ter,res)
  plot = wrf_map_overlays(f,wks,(/contour/),pltres,mpres)

  ter1 = wrf_user_getvar(f1,"RAINNC",0)
  contour = wrf_contour(f1,wks,ter1,res)
  plot = wrf_map_overlays(f1,wks,(/contour/),pltres,mpres)

  print(ter-ter1)
  contour = wrf_contour(f,wks,ter-ter1,res)
  plot = wrf_map_overlays(f,wks,(/contour/),pltres,mpres)
end

Figure 1 NCL comparison graph example
- GrADS graphical assessment
Grid Analysis and Display System (GrADS) is a data processing and display software system widely used in the meteorological industry. The software system reads, processes, displays and prints meteorological data through its integrated environment.
You can use a GrADS script to process the GrADS-format data files for comparison graph generation.
#!/bin/sh
level=500
source ./env-grads.sh
for v in {'h',}
do
    value=$v
    file=$value'_at_'$level'.png'
    cp draw-all.gs.bak draw.gs
    sed -i "s/FILE/$file/g" draw.gs
    sed -i "s/LEVEL/$level/g" draw.gs
    sed -i "s/VALUE/$value/g" draw.gs
    grads -bpcx draw.gs
done

draw.gs script:
'reinit'
infile='../post.ctl_2018110112_192'
outdir='./out'
outfile=outdir'/w_at_925.png'
level=925
value=w
'open 'infile
'query time'
res=subwrd(result,3)
'clear'
'set lev 'level
'set gxout shade2b'
'd 'value
'cbarn'
'draw title 'value' at 'level' 'res
'set grid off'
'set grads off'
'printim 'outfile' PNG white x800 y800'
Figure 2 GrADS comparison graph example