xk-ISP code manuals



  • tpg.md

    parameters

    typedef struct{
        bool    m_bTPG_en;
        uint13  m_nWidth;
        uint13  m_nHeight;
        uint2   m_nCFAPattern;
        bool    m_bRollingEnable;
        bool    m_bSensor_timing_en;
        uint12  m_nVBlank_num;
        uint12  m_nHBlank_num;
        uint8   m_valid_blank; 
        uint3   m_nID;
    } tpg_register;
    
    1. m_bTPG_en

      tpg module enable

    2. m_nWidth

      Parameter to be added

    3. m_nHeight

      Parameters to be added

    4. m_nCFAPattern

      CFA pattern ({0, 1, 2, 3} map to {rggb, grbg, gbrg, bggr})

    5. m_bRollingEnable

      Parameters to be added

    6. m_bSensor_timing_en

      Parameters to be added

    7. m_nVBlank_num

      Parameters to be added

    8. m_nHBlank_num

      Parameters to be added

    9. m_valid_blank

      Parameters to be added

    10. m_nID

      Initial colour block ID

    code manual

    algorithm flow

        tpg_row: for(uint13 i = 0; i < top_reg.frameHeight; i++){
            cnt = 0;
            BlockID = 0;
            tpg_col: for(uint13 j = 0; j < top_reg.frameWidth; j++){
                if(tpg_reg.m_bTPG_en == 1) {
                    if (cnt == BlockWidth){
                        cnt = 1;
                        BlockID++;
                    } else {
                        cnt++;
                    }
    
                    if(BlockID > 7){
                        BlockID = 7;
                    }
                    dst_t = src.read(); // consume the input stream; the value is overwritten by ColorSelect below
    
                    channel = (((i & 1) << 1) + (j & 1))^top_reg.imgPattern;
                    dst_t   = ColorSelect(channel, BlockID, tpg_reg);
                    dst.write(dst_t);
                } else {
                    dst_t = src.read();
                    dst.write(dst_t);
                    #ifdef DEBUG
                    if((i == ROW_TEST) && (j == COL_TEST)) {
                        printf("\t tpg_in = %x\n",dst_t.to_int());
                    }
                    #endif
                }
            }
        }
    

    In the tpg function, pixels in the same column of the output test image have the same value. Within a row, when cnt reaches BlockWidth, cnt is reset to 1 and BlockID is incremented; different BlockID values correspond to different colours, and BlockWidth is the width of one colour block. channel is the Bayer channel: for example, if top_reg.imgPattern = 0, then channel = 0 is the R channel, channel = 1 is the Gr channel, channel = 2 (binary 10) is the Gb channel, and channel = 3 (binary 11) is the B channel. The output value is then produced by the ColorSelect function.

    The ColorSelect function uses the following code to update the initial colour block when rolling is enabled:

    if(tpg_reg.m_bRollingEnable){
       RealID += tpg_reg.m_nID;
       if (RealID > 7){
           RealID -= 8;
       }
    }
    

    The colours are then filled in according to RealID; the correspondence is as follows:

    realID:
        0 -> white;
        1 -> black;
        2 -> red;
        3 -> green;
        4 -> blue;
        5 -> cyan(B+G);
        6 -> magenta(R+B);
        7 -> yellow(R+G);
    
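
    The body of ColorSelect is not reproduced in this manual. Below is a minimal sketch of the colour-fill logic, under the assumptions that pixels are 12-bit (full scale 0xfff), the channel encoding is 0 = R, 1 = Gr, 2 = Gb, 3 = B, and the RealID-to-colour table above applies; the helper name and types are hypothetical.

    // Sketch of a ColorSelect-style helper (hypothetical; not the actual module code).
    #include <cstdint>

    static uint16_t color_select_sketch(unsigned channel, unsigned block_id,
                                        bool rolling_en, unsigned init_id) {
        const uint16_t MAX = 0xfff;            // 12-bit full scale
        unsigned real_id = block_id;
        if (rolling_en) {                      // rolling: offset by the initial block ID
            real_id += init_id;
            if (real_id > 7) real_id -= 8;
        }
        // Per-colour channel flags for RealID 0..7:
        // white, black, red, green, blue, cyan(B+G), magenta(R+B), yellow(R+G)
        static const bool r_on[8] = {1, 0, 1, 0, 0, 0, 1, 1};
        static const bool g_on[8] = {1, 0, 0, 1, 0, 1, 0, 1};
        static const bool b_on[8] = {1, 0, 0, 0, 1, 1, 1, 0};
        if (channel == 0)                  return r_on[real_id] ? MAX : 0;   // R
        if (channel == 1 || channel == 2)  return g_on[real_id] ? MAX : 0;   // Gr / Gb
        return b_on[real_id] ? MAX : 0;                                      // B
    }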


  • Edge Enhancement

    Edge enhancement is an image processing filter that enhances the edge contrast of an image or video in an attempt to improve its acutance (apparent sharpness).

    ee.h

    The ee.h header file contains the basic definitions of the ee module.

    The relevant parameters of the ee module are stored in the ee_register struct. The eb parameter switches the module on and off, and the coeff parameter controls the intensity of sharpening.

    typedef struct{
        bool eb;
        uint8 coeff;
    }ee_register;
    

    The RGB value of each pixel is stored in the ee_pixel_t struct.

    typedef struct{
       uint12 r;
       uint12 g;
       uint12 b;
    }ee_pixel_t;
    

    The edgeenhancement function is the main function of the ee module; it calls the eeprocess function to perform the edge-enhancement operation, which is described later.

    void edgeenhancement(top_register&, ee_register&, stream_u36&, stream_u36&);
    uint36 eeprocess(uint36[5][5], ee_register&);
    

    ee.cpp

    edgeenhancement

    As mentioned above, the edgeenhancement function serves as the main function that performs the sharpening operation on the entire image and will be translated by the HLS compiler into the RTL description of the ee module.

    The EE module contains two storage units, ee_block and ee_lines.
    ee_block will be converted into registers for the convolution operation, and ee_lines will be translated into SRAM to improve throughput.

        uint36 ee_block[5][5];
        uint36 ee_lines[4][4096];
    

    The following directives instruct the compiler to map the two arrays onto the corresponding memories.

    set_directive_array_partition -type block -factor 5 -dim 1 "edgeenhancement" ee_block
    set_directive_array_partition -type block -factor 4 -dim 1 "edgeenhancement" ee_lines
    

    single_loop traverses the image, processing one pixel per cycle.

    The following code implements the transfer of data between storage units.

    block_refresh_loop_out:for(k = 0;k < 5;k++){
        block_refresh_loop_in:for(l = 0;l < 4;l++){
            ee_block[k][l] = ee_block[k][l+1];    // shift the 5x5 window left by one column
        }
    }

    for(k = 0;k < 4;k++){
        ee_block[k][4] = ee_lines[k][j];          // fill the rightmost column from the line buffers
    }
    ee_block[4][4] = src_in;                      // newest pixel enters the bottom-right corner

    for(k = 0;k < 4;k++){
        ee_lines[k][j] = ee_block[k+1][4];        // rotate the line buffers up by one row
    }
    

    When eb is 0, the input pixels are output directly without any processing.
    Only when eb is 1 does the module perform the sharpening operation on the input data, that is, call the eeprocess function.

    if(ee_top.eb)
    {
        if((i > 3) && (j > 3))
        {
            dst_out = eeprocess(ee_block, ee_top);
        }
        else
        {
            dst_out = ee_block[2][2];
        }

        if((i > 2) || ((i == 2) && (j >= 2)))
        {
            dst.write(dst_out);
        }
    

    padding_loop handles the pixels at the border of the image, which are output directly from ee_lines.

    padding_loop_1:for(k = 0;k < 2;k++){
        dst_out = ee_lines[1][isp_top.frameWidth - 2 + k];
        dst.write(dst_out);
    }

    padding_loop_2:for(k = 0;k < 2;k++){
        padding_loop_3:for(i = 0;i < isp_top.frameWidth;i++){
            dst.write(ee_lines[k+2][i]);
        }
    }
    

    eeprocess

    The function eeprocess implements the unsharp masking algorithm.

    First, the low-frequency part of the image (low_feq_r, low_feq_g, low_feq_b) is obtained by low-pass filtering the image twice.
    The filtering process is represented by the following two loops; Gaussian filtering comes first.

    for(k = 0;k < 5 ;k++)
        {
            for(l = 0;l < 5 ;l++)
            {
                rblock[k][l] = (int13)(ee_block[k][l] >> 24);
                gblock[k][l] = (int13)((ee_block[k][l] >> 12) & 0xfff);
                bblock[k][l] = (int13)(ee_block[k][l] & 0xfff);
                sharpen_threhold_r += guass_55[k][l] * rblock[k][l];
                sharpen_threhold_g += guass_55[k][l] * gblock[k][l];
                sharpen_threhold_b += guass_55[k][l] * bblock[k][l];
            }
        }
    //guass_55[5][5] = {1,2,4,2,1,2,4,8,4,2,4,8,16,8,4,2,4,8,4,2,1,2,4,2,1};
    
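
    Written out as a matrix, the kernel in the comment above (denoted $G$ here) is the following integer 5x5 Gaussian; its coefficients sum to 100.

    $$ G=\begin{bmatrix}1&2&4&2&1\\2&4&8&4&2\\4&8&16&8&4\\2&4&8&4&2\\1&2&4&2&1\end{bmatrix},\qquad \sum_{k,l}G_{k,l}=100 $$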

    The second one is average filtering.

     for(k = 0; k < 5; k++) {
            for (l = 0; l < 4; l++) {
    
                feq_l[l] = rblock[l][k]/2 + rblock[l + 1][k]/2;
                feq_h[l] = rblock[l][k]/2 - rblock[l + 1][k]/2;
    
                if(feq_h[l] > sharpen_threhold_r)
                {
                    feq_h[l] -= sharpen_threhold_r;
                }
                else if(feq_h[l] < - sharpen_threhold_r)
                {
                    feq_h[l] += sharpen_threhold_r;
                }
                else
                {
                    feq_h[l] = 0;
                }
            }
    
            temp_r = feq_l[1]/2 + feq_h[1]/2 + feq_l[2]/2 + feq_h[2]/2;
            result_v[k] = ee_clip(temp_r);
    

    The second step is to subtract the low-frequency image from the original image to obtain the high-frequency details.
    Finally, the high-frequency part is multiplied by the coeff parameter and added back onto the low-frequency image to achieve sharpening.

        high_feq_r = (int15)rblock[2][2] - low_feq_r;
        r_middle = high_feq_r * ee_top.coeff + 8;
        r_result = (r_middle >> 4) + low_feq_r;
        //red channel as an example
    
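
    Putting the pieces together for one channel, the sketch below restates the unsharp-mask step in plain C++ (assumptions: coeff acts as a Q4 strength, which is what the `+ 8` and `>> 4` rounding in the excerpt suggests, and the result is clipped to the 12-bit range; the function names are hypothetical).

    // Unsharp mask for a single channel, restated from the excerpts above (sketch only).
    #include <algorithm>

    static int ee_clip_sketch(int v) {                 // assumed 12-bit clip
        return std::min(std::max(v, 0), 4095);
    }

    static int unsharp_channel(int orig, int low_feq, int coeff) {
        int high_feq = orig - low_feq;                 // high-frequency detail
        int boosted  = (high_feq * coeff + 8) >> 4;    // scale the detail by coeff (Q4, rounded)
        return ee_clip_sketch(boosted + low_feq);      // add it back onto the low-pass value
    }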


  • YUV Denoise

    Algorithm

    Our YUV denoise module operates in YUV space. We chose non-local means (NLM) as the denoising algorithm in our HLS implementation since it is hardware-friendly and can be realized with several line buffers to support the window search operation required by each pixel.

    The NLM algorithm<sup>1</sup> is a simple but effective denoising method. Given a discrete noisy image $v = \{v(i) \mid i \in I\}$, the estimated value $NL[v](i)$ for a pixel $i$ is computed as a weighted average of all the pixels in the image,
    $$ NL[v](i)=\sum_{j \in I}\omega(i,j)\,v(j) $$

    the weights $\omega(i,j)$ are defined as:

    $$\omega(i,j)=\frac{1}{Z(i)}e^{-\frac{\Vert v(N_i)-v(N_j)\Vert_{2,a}^2}{h^2}}$$

    $v(N_i)$ is the vector formed by the grey values of the pixels in a square neighborhood of fixed size centered at pixel $i$.
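
    Here $Z(i)$ is the normalizing constant that makes the weights for pixel $i$ sum to one, and the parameter $h$ controls the decay of the exponential, i.e. the degree of filtering:

    $$Z(i)=\sum_{j} e^{-\frac{\Vert v(N_i)-v(N_j)\Vert_{2,a}^2}{h^2}}$$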

    As shown in Fig 1, similar pixel neighborhoods are given large weights, $w(p,q_1)$ and $w(p,q_2)$, while very different neighborhoods are given a small weight $w(p,q_3)$.

    HLS Implementation

    register parameters

        typedef struct{
        uint1 eb;//1 bit
        uint14 ysigma2;// sigma = [5,127]
        uint10 yinvsigma2;// invsigma = (1/sigma)<< 7
        uint14 uvsigma2;//[5,127]
        uint10 uvinvsigma2;//(1 / sigma) << 7
        uint4 yfilt;//[0.1~0.8] << 4
        uint4 uvfilt;//[0.1~0.8] << 4
        uint5 yinvfilt;//(1 / filt) << 1
        uint5 uvinvfilt;//(1 / filt) << 1
        uint14 yH2;
        uint18 yinvH2;
        uint14 uvH2;
        uint18 uvinvH2;
        }yuvdns_register;
    

    The parameters in yuvdns_register are explained in the following table:

    | parameters | explanation |
    | ---------- | ----------- |
    | eb | enables or disables the yuvdns module |
    | ysigma2 | squared deviation of the Gaussian kernel used for the distance measure in the Y plane |
    | uvsigma2 | squared deviation of the Gaussian kernel used for the distance measure in the UV plane |
    | yH2 | degree of filtering in the Y plane |
    | uvH2 | degree of filtering in the UV plane |

    functions

    yuv444dns

    
    void yuv444dns(top_register top_reg, yuvdns_register yuvdns_reg, stream_u10 &src_y, stream_u10 &src_u, stream_u10 &src_v, stream_u10 &dst_y, stream_u10 &dst_u, stream_u10 &dst_v)
    

    input/output description

    | Params | description |
    | ------ | ----------- |
    | top_reg | global configuration register |
    | yuvdns_reg | configuration register used only by the yuvdns module |
    | src_y | Y component of the source image stream |
    | src_u | U component of the source image stream |
    | src_v | V component of the source image stream |
    | dst_y | Y component of the denoised image stream |
    | dst_u | U component of the denoised image stream |
    | dst_v | V component of the denoised image stream |

    Return value

    None

    function description

    yuv444dns is the top-level denoising function. It performs NLM denoising on the incoming src_y, src_u, src_v pixel streams and outputs the denoised pixel streams dst_y, dst_u, dst_v separately and simultaneously.

    In the original non-local means algorithm, the denoised value of a pixel is computed as a weighted average of all the pixels in the image, which would require a large number of line buffers to store neighboring pixels; we therefore reduce the search area from the whole image to a 9x9 neighbor window around the pixel.

    In the yuv444dns function, 8 line buffers are defined to store previous line data, and a 9x9 2D array stores the search window around the center pixel.

    uint10 yWindow[9][9];
    uint10 uWindow[9][9];
    uint10 vWindow[9][9];
    uint10 ylineBuf[8][4096];
    uint10 ulineBuf[8][4096];
    uint10 vlineBuf[8][4096];
    

    For each pixel to be denoised, we form the 9x9 window from its surrounding pixels.

    The window preparation can be divided into the following 3 steps.

    Step 1
    Data in the 9x9 window is shifted left by one pixel.

    for(uint4 i = 0; i < 9; i++){
        for(uint4 j = 0; j < 8; j++){                           
            yWindow[i][j] = yWindow[i][j+1];
            uWindow[i][j] = uWindow[i][j+1];
            vWindow[i][j] = vWindow[i][j+1];
        }
    }
    
    
    

    Step 2

    The 9x9 window reads an 8x1 column of pixels from the 8 line buffers plus the input stream pixels y_t, u_t, v_t; these 9x1 new pixels are stored in the rightmost column of the window.

    for(uint4 i = 0; i < 8; i++){
        yWindow[i][8] = ylineBuf[i][col];
        uWindow[i][8] = ulineBuf[i][col];
        vWindow[i][8] = vlineBuf[i][col];
    }
    yWindow[8][8] = y_t;
    uWindow[8][8] = u_t;
    vWindow[8][8] = v_t;
    

    Step 3

    Update the line buffer

    for(uint4 i = 0; i < 7; i++){
        ylineBuf[i][col] = ylineBuf[i+1][col];
        ulineBuf[i][col] = ulineBuf[i+1][col];
        vlineBuf[i][col] = vlineBuf[i+1][col];
    }
    ylineBuf[7][col] = y_t;
    ulineBuf[7][col] = u_t;
    vlineBuf[7][col] = v_t;
    

    When the above 3 steps are finished, the yuvdns_nlm function (discussed later) is called to perform the block search operation.

    Additionally, it is worth noting that border pixels which do not have enough neighboring pixels to form a 9x9 search window are output with their original values. As a result, every pixel of a frame is denoised except the 4-pixel-wide border.
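
    For reference, the sketch below (not verbatim from the sources) shows how the three window-preparation steps and the yuvdns_nlm call combine for one incoming Y pixel; it uses plain uint16_t storage instead of the HLS uint10 type and a trivial stand-in for yuvdns_nlm, so any name or type not in the manual is an assumption.

    // How the window preparation feeds yuvdns_nlm for the Y plane (sketch only).
    #include <cstdint>

    constexpr int WIN = 9, LINES = 8, WIDTH = 4096;

    // Stand-in for the real yuvdns_nlm described below; it just returns the window centre.
    static uint16_t yuvdns_nlm_stub(uint16_t w[WIN][WIN]) { return w[4][4]; }

    static uint16_t process_y_pixel(uint16_t yWindow[WIN][WIN],
                                    uint16_t ylineBuf[LINES][WIDTH],
                                    uint16_t y_t, int col) {
        // Step 1: shift the 9x9 window left by one column
        for (int i = 0; i < WIN; i++)
            for (int j = 0; j < WIN - 1; j++)
                yWindow[i][j] = yWindow[i][j + 1];

        // Step 2: fill the rightmost column from the line buffers and the new pixel
        for (int i = 0; i < LINES; i++)
            yWindow[i][WIN - 1] = ylineBuf[i][col];
        yWindow[WIN - 1][WIN - 1] = y_t;

        // Step 3: rotate the line buffers so they always hold the 8 previous rows
        for (int i = 0; i < LINES - 1; i++)
            ylineBuf[i][col] = ylineBuf[i + 1][col];
        ylineBuf[LINES - 1][col] = y_t;

        // Interior pixels are then replaced by the NLM estimate of the window centre;
        // border pixels keep their original value (handled by the caller).
        return yuvdns_nlm_stub(yWindow);
    }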

    yuvdns_nlm

    function declaration

    uint10 yuvdns_nlm(uint10 Window[9][9], uint14 sigma2, uint14 H2, uint18 invH2){
        uint8  weight_1[8] = {255,226,200,176,156,138,122,108};
        uint8  weight_2[16] = {88,72,59,48,39,32,26,21,17,14,11,9,7,5,2,0};
    
        uint21 diff;
        uint22 diff_1;
        uint22 diff_2;
        uint22 diff_3;
        uint28 diff_tmp; 
        uint8 weight = 0, maxweight = 0; 
        uint14 totalweight =0; 
        uint25 totalvalue = 0;
        nlm_row_loop:for (uint4 j = 1; j < 8; j++){
            nlm_col_loop:for (uint4 i = 1; i < 8; i++) {
                if (i != 4 || j != 4) {
                    uint10 dis_1 = 0;
                    uint20 dis_11 = 0;
                    dis_1 = yuvdns_abs(Window[j-1][i-1],Window[3][3]);
                    dis_11 =  dis_1 * dis_1;
    
                    uint10 dis_2 = 0;
                    uint20 dis_22 = 0;
                    dis_2 = yuvdns_abs(Window[j-1][i],Window[3][4]);
                    dis_22 =  dis_2 * dis_2;
    
                    uint10 dis_3 = 0;
                    uint20 dis_33 = 0;
                    dis_3 = yuvdns_abs(Window[j-1][i+1],Window[3][5]);
                    dis_33 = dis_3 * dis_3;
    
                    uint10 dis_4 = 0;
                    uint20 dis_44 = 0;
                    dis_4 = yuvdns_abs(Window[j][i-1],Window[4][3]);
                    dis_44 = dis_4 * dis_4;
    
                    uint10 dis_5 = 0;
                    uint20 dis_55 = 0;
                    dis_5 = yuvdns_abs(Window[j][i],Window[4][4]);
                    dis_55 = dis_5 * dis_5;
    
                    uint10 dis_6 = 0;
                    uint20 dis_66 = 0;
                    dis_6 = yuvdns_abs(Window[j][i+1],Window[4][5]);
                    dis_66 = dis_6 * dis_6;
    
                    uint10 dis_7 = 0;
                    uint20 dis_77 = 0;
                    dis_7 = yuvdns_abs(Window[j+1][i-1],Window[5][3]);
                    dis_77 = dis_7 * dis_7;
    
                    uint10 dis_8 = 0;
                    uint20 dis_88 = 0;
                    dis_8 = yuvdns_abs(Window[j+1][i],Window[5][4]);
                    dis_88 = dis_8 * dis_8;
    
                    uint10 dis_9 = 0;
                    uint20 dis_99 = 0;
                    dis_9 = yuvdns_abs(Window[j+1][i+1],Window[5][5]);
                    dis_99 = dis_9 * dis_9;
    
                    diff_1 = dis_11 + dis_22 + dis_33;
                    diff_2 = dis_44 + dis_55 + dis_66;
                    diff_3 = dis_77 + dis_88 + dis_99;
    
                    diff = (diff_1 + diff_2 + diff_3) >> 3;
    
                    if(diff < 2 * sigma2){
                    	diff = 0;
                    }else{
                    	diff = diff - 2 * sigma2;
                    }
    
                    uint32 count = 0;
                    if(H2 == 0){
                        weight = 0;
                    }                   
                    else if(diff <= H2){
                        diff_tmp = 7 * diff;
                        count = (diff_tmp * invH2)>>14;
                        weight = weight_1[count];
                    }
                    else{
                        diff_tmp = 5 * diff;
                        count = (diff_tmp * invH2)>>14;
                        count = yuvdns_weight2_clip((count -5));
                        weight = weight_2[count];
                    }
    
                    if(weight > maxweight){
                        maxweight = weight;
                    }
                    totalweight = totalweight + weight;
                    totalvalue = totalvalue + Window[j][i] * weight;
               }
            }
        }
        totalweight =  totalweight + maxweight;
        totalvalue = totalvalue + Window[4][4] * maxweight;
    
        if(totalweight==0)
            return Window[4][4];
        else
            return yuvdns_clip(totalvalue / totalweight, YUVDNS_Max_Value, 0);
    }
    

    parameters description

    | Params | description |
    | ------ | ----------- |
    | Window[9][9] | search area around the pixel to be denoised |
    | sigma2 | squared deviation of the Gaussian kernel used for the distance measure |
    | H2 | $h^2$ |
    | invH2 | $\frac{1}{h^2}$ |

    Return

    the denoised value of the center pixel in Window[9][9]

    function description

    The yuvdns_nlm function performs the NLM search in the 9x9 window, calculating the similarity between each neighbor patch and the center patch. Finally, yuvdns_nlm returns the denoised pixel value.

    Reference

    [1] Buades, A., B. Coll, and J. M. Morel. "A non-local algorithm for image denoising." 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), IEEE, 2005.



  • dgain

    parameter

    typedef struct{
        bool    m_nEb;
        uint9   m_nBlcR;
        uint9   m_nBlcGr;
        uint9   m_nBlcGb;
        uint9   m_nBlcB;
        uint20  m_nR;
        uint20  m_nGr;
        uint20  m_nGb;
        uint20  m_nB;
    } dgain_register;
    
    1. m_nEb

      dgain module enable

    2. m_nBlcR, m_nBlcGr, m_nBlcGb, m_nBlcB

      R, Gr, Gb, B channel black level correction values

    3. m_nR, m_nGr, m_nGb, m_nB

      Bayer array R, Gr, Gb, B channel gain values

    dgain code manual

    src_t = src.read() gets the value of the input pixel. If m_nEb == 1, the bayerPattern is calculated. For example, if top_reg.imgPattern = 0, then bayerPattern = 0 is the R channel, bayerPattern = 1 is the Gr channel, bayerPattern = 2 (binary 10) is the Gb channel, and bayerPattern = 3 (binary 11) is the B channel. The gain_w and blc_w values are then chosen according to the bayerPattern, as shown in the code below:

    if (bayerPattern == 0) {
        blc_w = dgain_reg.m_nBlcR;
        gain_w = dgain_reg.m_nR;
    } else if (bayerPattern == 1) {
        blc_w = dgain_reg.m_nBlcGr;
        gain_w = dgain_reg.m_nGr;
    } else if (bayerPattern == 2) {
        blc_w = dgain_reg.m_nBlcGb;
        gain_w = dgain_reg.m_nGb;
    } else {
        blc_w = dgain_reg.m_nBlcB;
        gain_w = dgain_reg.m_nB;
    }
    

    Finally the output pixel values are calculated from the input values, gain values and black level correction values.
    the code is shown below:

    dst_tmp = (src_t - blc_w) * gain_w + GAIN_HALF_VALUE;
    dst_val = (dst_tmp >> GAIN_BITS) + top_reg.blc;
    dst_t = dgain_clip(dst_val,0,4095);
    

    If m_nEb == 0, the input pixel values are output directly.
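
    As a plain C++ restatement of the computation above (sketch only): the constants GAIN_BITS and GAIN_HALF_VALUE are not defined in this manual, so the values below are assumptions, with GAIN_HALF_VALUE taken as 1 << (GAIN_BITS - 1) for rounding.

    // Per-pixel dgain arithmetic, restated from the excerpt above (sketch only).
    #include <cstdint>
    #include <algorithm>

    constexpr int     GAIN_BITS       = 14;                      // assumed Q-format of the gain
    constexpr int32_t GAIN_HALF_VALUE = 1 << (GAIN_BITS - 1);    // rounding offset

    static uint16_t dgain_pixel(uint16_t src, uint16_t blc_w, uint32_t gain_w, uint16_t top_blc) {
        int64_t dst_tmp = (int64_t(src) - blc_w) * gain_w + GAIN_HALF_VALUE;
        int64_t dst_val = (dst_tmp >> GAIN_BITS) + top_blc;      // rescale and re-add the target black level
        return uint16_t(std::min<int64_t>(std::max<int64_t>(dst_val, 0), 4095));   // dgain_clip(dst_val, 0, 4095)
    }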



  • Demosaicking

    In order to reflect the true colors of the real world, each pixel needs to contain all three RGB components at the same time; the resulting image is called a true color image. The process of recovering the missing chromaticity information of each pixel is called color interpolation, also known as demosaicing.

    Parameter

    typedef struct {
        bool        eb; // 1 bit
    } 
    

    Input & Output

    typedef stream_u12 Stream_t;//input raw data
    typedef stream_u36 Stream_3t;//output rgb data
    

    Resources

    The algorithm uses a 5x5 window to complete the interpolation process, and the storage resources are listed as follows

    rawData_t rawWindow[5][5];//5x5 registers
    rawData_t lineBuf[4][4096];//4x4096 sram
    

    Algorithm

    For different arrangements of the Bayer array (determined by bayerPattern), the interpolation formulas differ. The basic idea is that the channel corresponding to the center pixel of the window remains unchanged, and the other channels are interpolated from the surrounding pixels of the corresponding channels.

    if (bayerPattern == 0)
        {
            pixTmp.r = rawWindow[2][2];
            pixTmp.g = 4 * rawWindow[2][2] - rawWindow[0][2] - rawWindow[2][0] - rawWindow[4][2] - rawWindow[2][4] \
                       + 2 * (rawWindow[3][2] + rawWindow[2][3] + rawWindow[1][2] + rawWindow[2][1]);
            pixTmp.b = 6 * rawWindow[2][2] - 3 * (rawWindow[0][2] + rawWindow[2][0] + rawWindow[4][2] + rawWindow[2][4]) / 2 \
                       + 2 * (rawWindow[1][1] + rawWindow[1][3] + rawWindow[3][1] + rawWindow[3][3]);
            pixTmp.g = pixTmp.g / 8;
            pixTmp.b = pixTmp.b / 8; 
        }
    else if (bayerPattern == 1)
        {
            pixTmp.r = 5 * rawWindow[2][2] - rawWindow[2][0] - rawWindow[1][1] - rawWindow[3][1] - rawWindow[1][3] - rawWindow[3][3] - rawWindow[2][4] \
                       + (rawWindow[0][2] + rawWindow[4][2]) / 2 + 4 * (rawWindow[2][1] + rawWindow[2][3]);
            pixTmp.g = rawWindow[2][2];
            pixTmp.b = 5 * rawWindow[2][2] - rawWindow[0][2] - rawWindow[1][1] - rawWindow[1][3] - rawWindow[4][2] - rawWindow[3][1] - rawWindow[3][3] \
                       + (rawWindow[2][0] + rawWindow[2][4]) / 2 + 4 * (rawWindow[1][2] + rawWindow[3][2]);
            pixTmp.r = pixTmp.r / 8;
            pixTmp.b = pixTmp.b / 8;
        }
    else if (bayerPattern == 2)
        {
            pixTmp.r = 5 * rawWindow[2][2] - rawWindow[0][2] - rawWindow[1][1] - rawWindow[1][3] - rawWindow[4][2] - rawWindow[3][1] - rawWindow[3][3] \
                       + (rawWindow[2][0] + rawWindow[2][4]) / 2 + 4 * (rawWindow[1][2] + rawWindow[3][2]);
            pixTmp.g = rawWindow[2][2];
            pixTmp.b = 5 * rawWindow[2][2] - rawWindow[2][0] - rawWindow[1][1] - rawWindow[3][1] - rawWindow[1][3] - rawWindow[3][3] - rawWindow[2][4] \
                       + (rawWindow[0][2] + rawWindow[4][2]) / 2 + 4 * (rawWindow[2][1] + rawWindow[2][3]);
            pixTmp.r = pixTmp.r / 8;
            pixTmp.b = pixTmp.b / 8;
        }
    else
        {
            pixTmp.r = 6 * rawWindow[2][2] - 3 * (rawWindow[0][2] + rawWindow[2][0] + rawWindow[4][2] + rawWindow[2][4]) / 2 \
                       + 2 * (rawWindow[1][1] + rawWindow[1][3] + rawWindow[3][1] + rawWindow[3][3]);
            pixTmp.g = 4 * rawWindow[2][2] - rawWindow[0][2] - rawWindow[2][0] - rawWindow[4][2] - rawWindow[2][4] \
                       + 2 * (rawWindow[3][2] + rawWindow[2][3] + rawWindow[1][2] + rawWindow[2][1]);
            pixTmp.b = rawWindow[2][2];
            pixTmp.r = pixTmp.r / 8;
            pixTmp.g = pixTmp.g / 8;
        }
    
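
    The interpolation above can produce intermediate values outside the 12-bit range, and the output stream is 36-bit packed RGB (the ee module later unpacks it as r = word >> 24, g = (word >> 12) & 0xfff, b = word & 0xfff). A hedged sketch of the clip-and-pack step, not taken verbatim from the source, is shown below.

    // Clip each interpolated channel to 12 bits and pack into one 36-bit word (sketch only).
    #include <cstdint>
    #include <algorithm>

    static uint64_t pack_rgb36(int r, int g, int b) {
        auto clip12 = [](int v) -> uint64_t {
            return uint64_t(std::min(std::max(v, 0), 4095));
        };
        return (clip12(r) << 24) | (clip12(g) << 12) | clip12(b);
    }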


  • dpc

    parameters

    typedef struct
    {
        bool     eb;
        uint11   th_w;
        uint11   th_b;
    }dpc_register;
    
    1. eb

    dpc module enable

    2. th_w

    bright (highlighted) pixel threshold

    3. th_b

    dim pixel threshold

    code manual

    algorithm

    The pinto algorithm is used for processing. It is based on the fact that a dead pixel is an extremely bright or dark pixel in its neighborhood, i.e. its value is the largest (or smallest) in the neighborhood. If the differences between the central pixel value and all the surrounding pixel values are all positive or all negative, the point is treated as a dead pixel and replaced by the median of the surrounding pixels; if the differences have mixed signs, the point is considered normal and is not modified by the pinto algorithm. The templates of the R, G and B channels are selected separately for filtering.

    code

    dpc_loop_out:for(uint13 row = 0; row < top_reg.frameHeight; row++){
            dpc_loop_in:for(uint13 col = 0; col < top_reg.frameWidth; col++){
                srcData = src.read();
                if(dpc_reg.eb == 1){
                    rawWin_loop:for(uint3 i = 0; i < 5; i++){
                        rawWindow[i][0] = rawWindow[i][1];
                        rawWindow[i][1] = rawWindow[i][2];
                        rawWindow[i][2] = rawWindow[i][3];
                        rawWindow[i][3] = rawWindow[i][4];
                    }
    
                    rawWindow_read:for(uint3 i = 0; i < 4; i++){
                        rawWindow[i][4] = lineBuffer[i][col];
    
                    }
    
                    rawWindow[4][4] = srcData;
    
                    line_write:for(uint3 i = 0; i < 4; i++){
                        lineBuffer[i][col] = rawWindow[i+1][4];
    
                    }
    
                    if((row > 3)&&(col > 3)){
                        uint2 bayerPattern = (((row & 1) << 1) + (col & 1))^top_reg.imgPattern;
                        pixel = rawWindow[2][2];
                        
                        if(bayerPattern == 0 || bayerPattern == 3){
                            arr_ori[0] = rawWindow[0][0];
                            arr_ori[1] = rawWindow[0][2];
                            arr_ori[2] = rawWindow[0][4];
                            arr_ori[3] = rawWindow[2][0];
                            arr_ori[4] = rawWindow[2][4];
                            arr_ori[5] = rawWindow[4][0];
                            arr_ori[6] = rawWindow[4][2];
                            arr_ori[7] = rawWindow[4][4];
    
                            flag = defectPixelDetection(arr_ori, pixel, th_w, th_b);
                            if(flag == 1){
                                dstData = medianFilter(arr_ori, arr_sort);
                            }
                            else{
                                dstData = pixel;
                            }
                        }
                        else{
                            arr_ori[0] = rawWindow[0][2];
                            arr_ori[1] = rawWindow[1][1];
                            arr_ori[2] = rawWindow[1][3];
                            arr_ori[3] = rawWindow[2][0];
                            arr_ori[4] = rawWindow[2][4];
                            arr_ori[5] = rawWindow[3][1];
                            arr_ori[6] = rawWindow[3][3];
                            arr_ori[7] = rawWindow[4][2];
    
                            flag = defectPixelDetection(arr_ori, pixel, th_w, th_b);
                            if(flag == 1){
                                dstData = medianFilter(arr_ori, arr_sort);
                            }
                            else{
                                dstData = pixel;
                            }
                        }
                    }
                    else{
                        dstData = 0;
                    }
                }
                else
                {
                    dstData = srcData;
                }
                if((row > 2) || ((row==2) && (col > 1))){
                    
                    dst.write(dstData);
                }
            }
        }
    

    If dpc_reg.eb == 1, build the 5x5 window used to decide whether the centre pixel is a dead pixel.

    rawWin_loop:for(uint3 i = 0; i < 5; i++){
      rawWindow[i][0] = rawWindow[i][1];
      rawWindow[i][1] = rawWindow[i][2];
      rawWindow[i][2] = rawWindow[i][3];
      rawWindow[i][3] = rawWindow[i][4];
    }
    
    rawWindow_read:for(uint3 i = 0; i < 4; i++){
      rawWindow[i][4] = lineBuffer[i][col];
    
    }
    
    rawWindow[4][4] = srcData;
    
    line_write:for(uint3 i = 0; i < 4; i++){
      lineBuffer[i][col] = rawWindow[i+1][4];
    
    }
    

    Dead pixel detection: compute the difference between each surrounding pixel and the center pixel. If all the differences are greater than th_w, the center pixel is detected as a highlighted (white) defect; if all the differences are smaller than -th_b, it is detected as a dim (black) defect. In both cases the function returns 1, marking the center pixel as a dead pixel.

    bool defectPixelDetection(uint12 arr_ori[8], uint12 pixel, uint11 th_w, uint11 th_b){
        int12 th_1 = th_w;
        int12 th_2 = -th_b;
    
        int13 diff_0;
        int13 diff_1;
        int13 diff_2;
        int13 diff_3;
        int13 diff_4;
        int13 diff_5;
        int13 diff_6;
        int13 diff_7;
    
        bool rst;
    
        diff_0 = arr_ori[0] - pixel;
        diff_1 = arr_ori[1] - pixel;
        diff_2 = arr_ori[2] - pixel;
        diff_3 = arr_ori[3] - pixel;
        diff_4 = arr_ori[4] - pixel;
        diff_5 = arr_ori[5] - pixel;
        diff_6 = arr_ori[6] - pixel;
        diff_7 = arr_ori[7] - pixel;
    
        if (diff_0 < th_2 && diff_1 < th_2 && diff_2 < th_2 && diff_3 < th_2 && diff_4 < th_2 && diff_5 < th_2 && diff_6 < th_2 && diff_7 < th_2) {
            rst = 1; //defect pixel, black defect pixl
        }
        else if(diff_0 > th_1 && diff_1 > th_1 && diff_2 > th_1 && diff_3 > th_1 && diff_4 > th_1 && diff_5 > th_1 && diff_6 > th_1 && diff_7 > th_1)
        {
            rst = 1; //defect pixel, white defect pixel
        }
        else
        {
            rst = 0; //right pixel
        }
    
        return rst;
    }
    

    Median filter. The filter performs an insertion sort: for each i, arr_ori[i] is inserted into the already-sorted prefix arr_sort[0..i], while the entries at positions j greater than i are left untouched for the time being. After i has traversed all eight numbers, arr_sort holds them in ascending order, and the median is taken as the average of arr_sort[3] and arr_sort[4].

    uint12 medianFilter(uint12 arr_ori[8], uint12 arr_sort[8]){
        uint12 item;
        uint12 t; //tmp value
        uint13 result;
        MF_outer:for (int6 i = 0; i < 8; i++)
        {
            item = arr_ori[i];
    
            MF_inter:for (int6 j = 7; j >= 0; j--) {
                if (j > i) {
                    t = arr_sort[j];
                }
                else if (j > 0 && arr_sort[j - 1] > item)
                {
                    t = arr_sort[j - 1];
                }
                else
                {
                    t = item;
                    if (j > 0)
                        item = arr_sort[j - 1];
                }
                arr_sort[j] = t;
            }
    
        }
    
        result = (arr_sort[3] + arr_sort[4]) >> 1;
        return result;
    }
    
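
    As a worked example with hypothetical values: for arr_ori = {10, 3, 7, 200, 5, 8, 6, 9} the insertion sort produces arr_sort = {3, 5, 6, 7, 8, 9, 10, 200}, and the returned value is (arr_sort[3] + arr_sort[4]) >> 1 = (7 + 8) >> 1 = 7, so the outlier 200 does not distort the result.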

    Finally, the dead pixels are replaced by the median value and the good pixels are left unchanged to produce the output.



  • AWB

    parameters

    typedef struct {
        bool m_nEb;
        uint15 r_gain;
        uint15 g_gain;
        uint15 b_gain;
        uint5 coeff;
    }awb_register;
    
    1. m_nEb

    awb module enable

    2. r_gain, g_gain, b_gain

    the computed R, G, B gain values (module outputs)

    3. coeff

    mean coefficient

    code manual

    code

    Calculate the pixel sums for the R, G, B channels according to the bayerPattern.

     uint2  bayerPattern = (((y & 1) << 1) + (x & 1))^top_reg.imgPattern;
    
     if (bayerPattern == 0)
     {
         r_total = r_total + src_t;
     }
     else if (bayerPattern == 1)
     {
         g_total = g_total + src_t;
     }
     else if (bayerPattern == 2)
     {
         g_total = g_total + src_t;
     }
     else
     {
         b_total = b_total + src_t;
     }
    

    Calculate the pixel averages for the R, G, B channels (the green sum is shifted by one extra bit because the Bayer pattern contains twice as many green pixels).

    r_avg = (r_total * awb_reg.coeff) >> 19;
    g_avg = (g_total * awb_reg.coeff) >> 20;
    b_avg = (b_total * awb_reg.coeff) >> 19;
    

    calculate the r_gain, g_gain, b_gain

    awb_reg.r_gain = 16384 * g_avg / r_avg;
    awb_reg.g_gain = 16384;
    awb_reg.b_gain = 16384 * g_avg / b_avg;
    
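
    As a worked example with hypothetical averages: if g_avg = 512 and r_avg = 400, then r_gain = 16384 * 512 / 400 = 20971, i.e. a red gain of about 1.28 in the Q14 fixed-point format in which g_gain = 16384 represents unity gain.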


  • Raw Denoise

    The RAWDNS module implements image denoising in the raw domain. The NLM (Non-local Means) algorithm based on block matching is used for denoising.

    Parameter

    typedef struct{
        uint6   sigma;    //range[0-63]
        uint1   eb;       //{0,1}
        uint7   Filterpara; //range[0-127]
        uint12  invksigma2; //1/ksigma2 <<12
    }rawdns_register;
    

    The parameters sigma and Filterpara jointly control the intensity of the filtering. The parameter invksigma2 is the quantized reciprocal of the product of sigma and Filterpara.

    Input & Output

    typedef stream_u12 Stream_t;//input raw data
    typedef stream_u12 Stream_t;//output raw data
    

    Resources

    The algorithm uses an 11x11 window to complete the denoising process, and the storage resources are listed as follows

    uint12 rawdns_lines[10][8192];//10x8192 sram
    uint12 rawdns_block[11][11];// 11x11 registers
    

    Algorithm

    Function rawdns_process describes the block matching based NLM algorithm, and the main calculation process is the following loop.

    for(k = 1; k <= 9; k += 2){
        for(l = 1; l <= 9; l += 2){
            if((k != 5) || (l != 5))
            {
                eur_distance = Cal_Eur_Distance(rawdns_block, k, l);
                diff = rawdns_max(eur_distance, sigma2);
                weight = Cal_weight(diff, rawdns_reg, ksigma2);
                if(weight > max_weight)
                {
                    max_weight = weight;
                }
                total_weight += weight;
                total_value += weight * rawdns_block[k][l];
            }
        }
    }
    

    The Cal_Eur_Distance function calculates the Euclidean distance between different blocks, and Cal_weight calculates the Gaussian weight of each pixel. Finally, according to the accumulated results, the denoised value of the window's center pixel is output.

    if(total_weight == 0)
        return rawdns_block[5][5];
    else
        return rawdns_clip(total_value/total_weight);
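
    The bodies of Cal_Eur_Distance and Cal_weight are not reproduced in this manual. As an illustration only, the sketch below shows a plausible Cal_Eur_Distance along the lines of the patch comparison used by yuvdns_nlm above: the mean of squared differences between the 3x3 patch around the candidate position (k, l) and the 3x3 patch around the window centre (5, 5). The function name, types and the final scaling are assumptions.

    // Plausible patch-distance sketch (hypothetical; the real Cal_Eur_Distance is not shown here).
    #include <cstdint>

    static uint32_t cal_eur_distance_sketch(const uint16_t block[11][11], int k, int l) {
        uint32_t acc = 0;
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                int diff = int(block[k + dy][l + dx]) - int(block[5 + dy][5 + dx]);
                acc += uint32_t(diff * diff);
            }
        }
        return acc >> 3;   // divide by ~8, mirroring the averaging in yuvdns_nlm
    }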