Working Paper Article, Version 2 (this version is not peer-reviewed)

Luminance Degradation Compensation Based on Multi-Stream Self-Attention to Address Thin Film Transistor-Organic Light Emitting Diode Burn-in

Version 1 : Received: 7 April 2021 / Approved: 8 April 2021 / Online: 8 April 2021 (11:05:13 CEST)
Version 2 : Received: 29 April 2021 / Approved: 29 April 2021 / Online: 29 April 2021 (09:10:06 CEST)

A peer-reviewed article of this Preprint also exists.

Park, S.; Park, K.-H.; Chang, J.-H. Luminance-Degradation Compensation Based on Multistream Self-Attention to Address Thin-Film Transistor-Organic Light Emitting Diode Burn-In. Sensors 2021, 21, 3182.

Abstract

We propose a deep learning algorithm that directly compensates for luminance degradation caused by the deterioration of organic light emitting diode (OLED) devices, in order to address the burn-in phenomenon of OLED displays. Conventional compensation circuits are encumbered by high development and manufacturing costs owing to their complexity. However, given that deep learning algorithms are typically mounted on a system on chip (SoC), the complexity of the circuit design is reduced, and the circuit can be reused by re-learning only the changed characteristics of a new pixel device. The proposed approach comprises deep feature generation and multi-stream self-attention, which decipher the importance of, and the correlations between, burn-in-related variables. It also utilizes a deep neural network that identifies the nonlinear relationship between the extracted features and luminance degradation. Thereafter, the luminance degradation is estimated from the burn-in-related variables, and the burn-in phenomenon can be addressed by compensating for it. The experimental results revealed that compensation was successfully achieved within an error range of 4.56% and demonstrated the potential of a new approach that mitigates the burn-in phenomenon by directly compensating for pixel-level luminance deviation.

Keywords

thin film transistor (TFT); organic light emitting diode (OLED); compensation circuit; luminance degradation; artificial intelligence; deep neural network; convolutional neural networks

Subject

Computer Science and Mathematics, Algebra and Number Theory

Comments (1)

Comment 1
Received: 29 April 2021
Commenter: Joon-Hyuk Chang
Commenter's Conflict of Interests: Author
Comment: The contents have been revised according to the reviewers' comments.

[Reviewer 1]
Response 1:
In response to the reviewer's comment on Point 1, we modified the manuscript as follows: "Eventually, as the usage time increases, the deterioration of the OLED device accelerates, and luminance degradation occurs [4, 5]. Indeed, Xia et al. [6] reported that OLED luminance degradation is caused by intrinsic and/or extrinsic factors. Intrinsic factors are generally related to moisture or oxygen, which can cause delamination or oxidation of the electrodes. Extrinsic factors are related to the degradation of the supply voltage and current and to changes in ambient temperature over the lifetime of OLED displays [7, 8]. In addition, Kim et al. [9] described the characteristics of color and luminance degradation. They used an electroluminescence (EL) degradation model of R, G, and B pixels over stress time and found that the blue pixel degrades faster than the other pixels. They also described that luminance tends to decrease rapidly at the beginning of use and then more gradually. Several studies have modeled the power consumption of the R, G, and B components of an OLED pixel. The blue component consumes more power than the red and green components [10-12]. Ultimately, the burn-in phenomenon is a major cause of the deterioration of image and video quality over time [13-16]. Therefore, research on pixel compensation technology that effectively addresses the burn-in phenomenon of OLED displays is important to continuously provide high-quality images and videos to users." (p. 1~2, lines 36-51, in the introduction)
We removed the original sentences from the Introduction and added more information about the behaviour of OLED displays.

"In our experiments, we used the blue pixel data, which has the largest power consumption and a much faster degradation rate than the red and green pixel data [33]. Therefore, a deep-learning-based compensation algorithm was trained and evaluated on 1.08 billion blue-pixel data samples generated using data simulators and data augmentation. The compositions of the datasets, divided into training data and test data, are shown in Table 3. Figure 7 shows the power consumption and the luminance degradation rate for the blue, red, and green pixels, respectively." (p. 9, lines 236-242, in the Datasets)
We modified the contents and the figure of the luminance degradation rate to provide more detail. (Figure 7. Luminance degradation rate for the normalized blue, red, and green pixel data.)

Response 2: In response to the reviewer's comment on Point 2, we modified the manuscript as follows: "In general, increasing the amount of data improves the performance of deep learning models [26]. We also generate additional data via data augmentation; subsequently, we conduct training using these data together with the existing data. Furthermore, natural data in the real world contain noise caused by various conditions such as temperature, humidity, and initial tolerance. It is therefore necessary to reflect this noise and generate data that resemble natural data as closely as possible. The bootstrap method is an approach for increasing the training data using random sampling. Figure 2 shows a block diagram of the proposed data augmentation algorithm based on the bootstrap method. First, 60 million samples are drawn six times using random sampling from the 720 million pixel data generated through a data simulator. A sample extracted in this manner is called a bootstrap sample, and the mean and standard deviation of each bootstrap sample are calculated. Then, 60 million random numbers are generated from the calculated mean and standard deviation to obtain noise that follows the Gaussian distribution of the bootstrap sample data. The generated random numbers are multiplied by a constant weight of 0.01 to reduce the information loss that may occur when noise is applied to the original data. Finally, each of the 60 million bootstrap samples is multiplied by its corresponding random number to generate new data. Because this method derives noise from the distribution of each bootstrap sample, the noise resembles the distribution of the original data. Consequently, from the 720 million pixel data generated in the data simulator, an additional 360 million training data with independent characteristics are generated for each of the R, G, and B colors." (p. 6, lines 154-174, in the Data Augmentation)

We described the bootstrap method used in data augmentation, and its figure, in more detail. In addition, we generated only half as much augmented data as the original data in order to prevent the overfitting that can occur when the amount of data grows too large.
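For concreteness, the following NumPy sketch mirrors the bootstrap augmentation procedure quoted above. The function name and structure are ours, and the multiplicative application of the 0.01-weighted Gaussian noise follows a literal reading of the text; the paper-scale sample sizes (six draws of 60 million) are kept as defaults but would be processed in chunks in practice.

```python
import numpy as np

def bootstrap_augment(pixels, n_draws=6, sample_size=60_000_000,
                      noise_weight=0.01, rng=None):
    """Bootstrap-based data augmentation (sketch of the quoted procedure).

    pixels: 1-D array of simulator-generated pixel data.
    Returns a list of n_draws augmented sample arrays.
    """
    rng = rng if rng is not None else np.random.default_rng()
    augmented = []
    for _ in range(n_draws):
        # Draw a bootstrap sample (random sampling with replacement).
        sample = rng.choice(pixels, size=sample_size, replace=True)
        # Gaussian noise following the bootstrap sample's own statistics.
        noise = rng.normal(sample.mean(), sample.std(), size=sample_size)
        # Weight the noise by 0.01 to limit information loss, then apply
        # it multiplicatively to the bootstrap sample (literal reading).
        augmented.append(sample * (noise_weight * noise))
    return augmented
```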

Response 3: In response to the reviewer's comment on Point 3, we modified the manuscript as follows:
"Figure 3 shows the structure of the entire proposed deep learning model, which is trained to estimate the luminance deviation ( ) that requires compensation." (p. 6, lines 181-183)
The word 'proposed' was added to indicate that it is a proposed model.

“Overview of the proposed deep feature generation model.” (The caption of Figure 4)
The word 'proposed' was added to indicate that it is a proposed model. The algorithm used in Figure 4 is a combination of a 1D convolutional neural network, a deep neural network, and rectified linear units, which are generally used for feature extraction, and was designed through several experiments. The number of layers and units used in the network is therefore chosen to suit the domain of each data type.
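As a rough illustration of the combination named above, a single feature-generation stream might look like the following Keras sketch. The layer counts, filter size, and the output width of 11 (chosen to match the per-head feature size quoted below) are our assumptions, not the paper's tuned design.

```python
import tensorflow as tf

def deep_feature_stream(input_len, n_filters=16, feat_dim=11):
    """One deep-feature-generation stream: Conv1D -> ReLU -> Dense -> ReLU.

    All hyperparameters are illustrative; the paper tunes the number of
    layers and units per data domain through experiments.
    """
    inp = tf.keras.Input(shape=(input_len, 1))
    x = tf.keras.layers.Conv1D(n_filters, kernel_size=3,
                               padding="same", activation="relu")(inp)
    x = tf.keras.layers.Flatten()(x)
    out = tf.keras.layers.Dense(feat_dim, activation="relu")(x)
    return tf.keras.Model(inp, out)
```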

"Multi-stream self-attention [24] has already been applied to the field of speech recognition, where it achieves state-of-the-art performance. Based on the idea of this algorithm, we propose a modified multi-stream self-attention that is optimized for learning the outputs of deep feature generation. The proposed multi-stream self-attention consists of two multi-head self-attention layers [25], each of which consists of five self-attention layers, as shown in Figure 5. The algorithm operates as follows. First, the multi-stream self-attention improves the performance of deep learning algorithms through ensemble-like effects. Second, the multi-head self-attention corresponding to each stream is trained to increase the weights of the features most important for compensating the degraded luminance. Similarly, the weights of less important features are reduced during training; that is, when five inputs with dimensions (1, 11) are fed to each head, an extraction process adjusts the weight values to focus on the most important of the 11 features. Third, because the multi-head self-attention preserves the dimensions of its input, concatenating the five outputs of each head yields data with dimensions (1, 55). Finally, the two stream outputs together yield data with dimensions (1, 110)." (p. 8, lines 206-222, Figure 5)
We added a description of the field in which the underlying idea has been used, and clarified that the idea was adapted for our setting.
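To make the quoted dimensions concrete, here is a minimal Keras sketch of the data flow: two streams of five-head self-attention over five (1, 11) feature vectors, flattened to (1, 55) per stream and concatenated to (1, 110). Only the shapes and head/stream counts come from the quoted text; the remaining details are illustrative assumptions.

```python
import tensorflow as tf

def multi_stream_self_attention(n_feats=5, feat_dim=11, n_streams=2):
    """Sketch of the modified multi-stream self-attention's data flow."""
    inp = tf.keras.Input(shape=(n_feats, feat_dim))  # five (1, 11) features
    streams = []
    for _ in range(n_streams):
        # Self-attention: query, key, and value are the same tensor.
        att = tf.keras.layers.MultiHeadAttention(
            num_heads=5, key_dim=feat_dim)(inp, inp)
        # Attention preserves the input shape, so flattening yields a
        # (1, 55) vector per stream.
        streams.append(tf.keras.layers.Flatten()(att))
    # Concatenating the two stream outputs yields the (1, 110) vector.
    out = tf.keras.layers.Concatenate()(streams)
    return tf.keras.Model(inp, out)
```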

“Overview of the proposed deep neural network model.” (The caption of Figure 6)
The DNN algorithm is widely used in various fields, and its composition varies with the data domain. In this study, we explored the number of layers best suited to learning our data. In addition, because there are several trade-offs in applying other hyper-parameters, such as batch normalization, the choice of nonlinear function (ReLU or Leaky-ReLU), and dropout, we designed the algorithm through several experiments. Therefore, the word 'proposed' was added to indicate that it is a proposed model.
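For illustration, a regression DNN combining the hyper-parameter choices discussed above (batch normalization, Leaky-ReLU, dropout) could be sketched as follows; the layer widths, dropout rate, and the single linear output for the estimated luminance deviation are assumptions for the sketch, not the paper's final configuration.

```python
import tensorflow as tf

def luminance_dnn(input_dim=110, hidden=(128, 64, 32), drop_rate=0.2):
    """Regression DNN sketch: Dense -> BatchNorm -> Leaky-ReLU -> Dropout
    blocks, ending in a single linear unit for the luminance deviation."""
    model = tf.keras.Sequential([tf.keras.Input(shape=(input_dim,))])
    for units in hidden:
        model.add(tf.keras.layers.Dense(units))
        model.add(tf.keras.layers.BatchNormalization())
        model.add(tf.keras.layers.LeakyReLU())
        model.add(tf.keras.layers.Dropout(drop_rate))
    model.add(tf.keras.layers.Dense(1))  # estimated luminance deviation
    return model
```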

[Reviewer 2]
Response 1: In response to the reviewer's comment on Point 2, we modified the manuscript as follows:
The revised passage is the same one quoted above in Response 1 to Reviewer 1 ("Indeed, Xia et al. [6] reported that OLED luminance degradation is caused by intrinsic and/or extrinsic factors. …"). (p. 1~2, lines 36-51, in the introduction)

With our current experimental environment and data, it was difficult to measure power consumption directly, so we added further references on the power-consumption behaviour of the R, G, and B components in the Introduction.

Response 2: In response to the reviewer's comment on Point 3, and in line with the modifications for Point 2, we added more information and references about the behaviour of OLED displays (luminance and color degradation) over time.

The corresponding revision is the same passage quoted above in Response 1 to Reviewer 1 ("In our experiments, we used the blue pixel data …"). (p. 9, lines 236-242, in the Datasets)
We modified the contents and the figure of the luminance degradation rate to provide more detail.

[Reviewer 3]
Response 1: In response to the reviewer's comment on Point 1, we modified the manuscript as follows:
“We propose a deep learning algorithm that directly compensates for luminance degradation owing to the deterioration of organic light emitting diode (OLED) devices to address the burn-in phenomenon of OLED displays.” (p. 1, lines 1-2, in the abstract)

Response 2: In response to the reviewer's comment on Point 2, we rephrased the content as below. "However, the compensation circuit requires additional external sensing circuits, logic circuits, and external memory with a simple pixel structure. In particular, an analog-to-digital converter (ADC) is required for sensing, in addition to memory for the storage of the sensing data." (p. 2, lines 65-68, in the introduction)

Response 3: We changed the input variables (from five to four) and then conducted our experiments again.
Accordingly, some contents, figures, and results were modified as below.
"The data used in the deep learning model consist of four features in vector form with dimensions (1, 4)." (p. 6, lines 177-178, in the Data Configuration)
Figures 3, 4, 5, 6, and 8 were modified.
Tables 4, 5, 6, and 7 were modified.