
Developing the increasingly elaborate OSDs of flat-panel TVs consumes a large share of a firmware engineer's time; a structured OSD design can shorten development time and improve code quality. After introducing OSD implementation methods, OSD types, and the UI elements and definitions of the OSD, this article analyzes in detail a method and approach for OSD development based on a structured OSD UI processing mechanism.

Figure 1: Character OSD.

As flat-panel TVs gain more and more features, their increasingly elaborate OSD interfaces consume a large share of firmware engineers' development time. Many firmware engineers repeat the same work over and over, writing the same OSD text, graphics, and human-machine interface (UI) interaction code for each model. In more complex UI and OSD systems, this code accounts for 30-60% of the total code size, and debugging fragile UI code also eats up a great deal of system debugging time.

The UI of a flat-panel TV mainly consists of inputs such as the buttons on the set and the infrared remote control, and outputs such as the OSD and the buzzer. The main function of the OSD is to provide an intuitive graphical interface that helps the user control the set and obtain information from it. Figures 1 and 2 show the kind of OSD a user sees most often. As the system's processing power increases, today's OSD can even provide accessories such as built-in games, a notepad, and a perpetual calendar. This article focuses on the design of the OSD firmware and the UI controls associated with it, and attempts to provide a definition of and solution for the UI in flat-panel TVs, reducing the time firmware engineers need to construct the UI and OSD interface. The concepts and schemes in this article apply equally to other situations involving dot-matrix display control.

The main implementation methods and types of OSD

There are currently two main ways to implement an OSD: superimposing the output of an external OSD generator onto the video processor's output, or having the video processor support the OSD internally and overlay the OSD information directly in its video buffer.

The external approach works as follows: an MCU with a built-in character generator and display buffer uses a fast-blank signal to switch between the TV picture and the OSD content, so that OSD characters and other content are superimposed onto the final picture. During this superimposition, a translucent effect can be achieved by adjusting the mixing ratio between the two signals, and re-encoding the red, green, and blue components of the OSD signal yields different OSD color effects.
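As a rough, chip-independent illustration of the mixing step, the translucent effect can be modeled as a weighted blend of the TV picture and the OSD pixel; the blend ratio "alpha" below is an assumed register value, since real OSD generators perform this step in hardware:

/* Hypothetical sketch of the translucent mixing described above.
 * alpha (0..255) is an assumed blend-ratio value; the hardware applies
 * this per pixel or per region during fast-blank switching. */
typedef struct { unsigned char r, g, b; } rgb_t;

static rgb_t blend_osd(rgb_t video, rgb_t osd, unsigned char alpha)
{
    rgb_t out;
    out.r = (unsigned char)((osd.r * alpha + video.r * (255 - alpha)) / 255);
    out.g = (unsigned char)((osd.g * alpha + video.g * (255 - alpha)) / 255);
    out.b = (unsigned char)((osd.b * alpha + video.b * (255 - alpha)) / 255);
    return out;   /* alpha = 255: pure OSD; alpha = 0: pure TV picture */
}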

In the other approach, the video processor itself supports the OSD and superimposes the OSD information directly in its video buffer. Such a video processor usually has external memory or a small internal line buffer, along with an OSD generator. OSD composition and control, including the translucency and color control described above, are performed directly in the video buffer.

OSDs come in two types: character-based (font-based) and bitmap.

Character OSD (Figure 1 shows a character-type OSD): to save display cache, early and low-cost solutions use a character-based OSD generator. The principle is to divide the displayable content into blocks of a specific format (for example 12 × 18 or 12 × 16 pixels), such as the digits 0-9, the letters a-z, and the common symbols for brightness, contrast, and so on, and to store these patterns in ROM or flash. The display cache then holds only the corresponding index numbers. This dictionary-like structure greatly reduces the amount of display cache required.

To control attributes such as the color of each character, there is usually also an attribute cache of the same size as the display cache; its attributes (foreground color, background color, flicker, and so on) apply to every pixel of the corresponding character. To compensate for not being able to assign a color to each individual pixel, OSD generator designers provide a scheme for rendering multi-color characters by combining multiple display buffers: each display cache defines one color scheme, and when two or more display caches are merged, characters with two or more colors can be "patched together".
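A minimal sketch of this "dictionary" arrangement might look like the following; the buffer sizes, font format, and attribute encoding are assumptions rather than those of any particular OSD generator:

/* Illustrative model of a character OSD: font patterns live in ROM, while
 * the display cache stores only small index numbers, with a parallel
 * attribute cache of the same size. All sizes here are assumptions. */
#define OSD_COLS 30
#define OSD_ROWS 12

/* 12x18 font: 18 rows of 12 bits, packed as 2 bytes per row; the OSD
 * hardware (not the MCU) reads these patterns when rendering. */
extern const unsigned char FONT_ROM[][18 * 2];

static unsigned char disp_cache[OSD_ROWS][OSD_COLS]; /* character indices   */
static unsigned char attr_cache[OSD_ROWS][OSD_COLS]; /* fg/bg color, blink  */

static void osd_put_char(unsigned char row, unsigned char col,
                         unsigned char font_index, unsigned char attr)
{
    disp_cache[row][col] = font_index;  /* e.g. index of '0'..'9', 'a'..'z'   */
    attr_cache[row][col] = attr;        /* applies to every pixel of the cell */
}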

Figure 2: Bitmap OSD.

The advantage of the character OSD is that it needs relatively little display cache inside the OSD generator, and the MCU only has to specify the index of the content to be displayed, so it can be implemented on relatively slow MCUs. However, precisely because this indexed display and color-coding scheme is not very intuitive, it makes firmware development for a character OSD somewhat troublesome. This type of OSD is commonly used in LCD monitors, low-cost flat-panel TVs, and traditional CRT TVs, and it still dominates the market.

Compared with the character OSD, the bitmap OSD (Figure 2 shows a bitmap OSD) works on a more intuitive and simpler principle: the OSD information is superimposed directly onto the final picture by changing each pixel of a specific area of the displayed content. This pixel-by-pixel control ensures rich color and sufficient expressive power. Bitmap OSD generators are typically built into the video processor and share its main display cache. There are also dedicated bitmap OSD generators independent of the video processor, such as Maxim's MAX4455, which typically require external SDRAM as the display buffer.

The display quality of a bitmap OSD can, in theory, be near perfect; it can provide Windows-like three-dimensional objects such as buttons with shadows and rich graphics and text. The disadvantages are that it requires enough OSD display cache, and that processing the OSD pixel by pixel places higher speed demands on the MCU. This type of OSD is typically used in large high-end flat-panel TVs and professional displays. As technology evolves and memory costs continue to fall, future OSDs should all be bitmap-based.
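By contrast with the indexed character scheme, a bitmap OSD is manipulated pixel by pixel. A simplified, purely illustrative fill routine for a rectangular region of a shared display buffer could look like this (the frame-buffer pointer, stride, and 16-bit pixel format are assumptions):

/* Hypothetical pixel-level fill for a bitmap OSD region. Real bitmap OSD
 * generators expose the region through the video processor's display cache;
 * the point here is only that every pixel is addressed individually. */
static void osd_fill_rect(unsigned short *frame, unsigned short stride,
                          unsigned short x, unsigned short y,
                          unsigned short w, unsigned short h,
                          unsigned short color)
{
    unsigned short i, j;
    for (j = 0; j < h; j++)
        for (i = 0; i < w; i++)
            frame[(y + j) * stride + (x + i)] = color; /* one pixel at a time */
}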

Basic UI elements and definitions of the OSD

The purpose of displaying an OSD is to convey information to the user. What information needs to be conveyed? It usually includes prompts, warning messages, numerical displays of control parameters, and more. Although this information is ultimately just a combination of characters or pixels, regardless of its displayed shape, classifying it and defining its attributes helps the firmware developer encode and process it uniformly. This article attempts to classify and analyze these elements and gives a unified firmware handling method below.

1. Basic concepts of the OSD

UI language: the language used in the text portion of the OSD content.
UI mode: the environment to which the OSD content applies, such as the mode changes brought about by different signal sources (TV, DVD, PC); its main purpose is to distinguish the different behaviors of the OSD in different environments.
UI scene: the specific page to which the OSD currently applies, for a given language and mode, when there are multiple pages of information.
UI event: an operation command delivered to the UI system by the user through an input device.
UI action table: an index table that maps UI input commands to their handling in a specific UI scene.
OSD canvas: the area in which the entire OSD is rendered, usually a rectangle.
OSD position: usually the position of an object relative to the upper-left corner (the origin) of the OSD canvas.
OSD object: a combination of pixels drawn on the canvas that expresses specific information and has specific attributes.
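To make these definitions concrete, they map naturally onto a handful of small enumerations in firmware. The following sketch is purely illustrative: the names and values are assumptions, and the ACT_Struct shown is only one possible shape for the action-table entry referenced in the structure later in this article.

/* Illustrative encodings of the UI concepts above; project-specific
 * assumptions, not a fixed standard. */
typedef enum { LAN_CHINESE, LAN_ENGLISH, LAN_FRENCH, LAN_SPANISH } ui_lan_t;
typedef enum { MODE_TV, MODE_DVD, MODE_PC } ui_mode_t;              /* UI mode  */
typedef enum { SCENE_MAIN_MENU, SCENE_PICTURE, SCENE_SOUND } ui_scene_t;
typedef enum { EV_KEY_UP, EV_KEY_DOWN, EV_KEY_LEFT,
               EV_KEY_RIGHT, EV_KEY_MENU, EV_KEY_OK } ui_event_t;   /* UI event */

/* One UI action table entry: in a given scene, map an event to a handler.
 * This is an assumed definition, not the author's exact one. */
typedef struct {
    ui_event_t event;
    void (*handler)(void);
} ACT_Struct;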

2. The basic elements of the OSD

The OSD information mainly includes the following basic elements (the terms used here may not be standard, but their meaning should be clear): area, label, icon, text, progress bar, animation, number, optional icon, and navigation information. Their definitions, roles, attributes, and response events are given below.

a. Area
Definition: A rectangular or arbitrarily shaped region in the OSD canvas marked with specific attributes (color, flicker, size, etc.).
Role: Classifies or labels OSD content, for example a title area or a content area.
Attributes: position, color, flicker characteristics, etc.
Response event: As fixed information content, it usually does not respond to UI input.

b. Label
Definition: A fixed text message of one or more lines.
Role: Provides the necessary textual description of the OSD content.

Figure 3: Character OSD structure.

Attributes: position, color, flicker characteristics, language category, capitalization, alignment, etc.
Response event: As fixed information content, it usually does not respond to UI input.

c. Icon
Definition: A shape formed by a specific character or combination of pixels that expresses recognizable information.
Role: Gives the OSD content a pictorial representation, such as "play", "prohibited", and other specific symbols.
Attributes: position, color, flicker characteristics, etc.
Response event: As fixed information content, it usually does not respond to UI input.

d. Text
Definition: Like a label, it is textual information, but it can change with the user's operations.
Role: Provides text prompts about the user's selection, with content that changes as the selection changes.
Attributes: position, color, language, capitalization, alignment, etc.
Response event: A change in the user's selection, usually to the previous or next choice.

e. Progress bar (Bar)
Definition: A rectangular bar object whose appearance changes with its value. In the future there may be other shapes for such objects, such as fuel-gauge dials, but they share the same attributes.
Role: Gives a graphical representation of a value.
Attributes: position, color, upper and lower limits, current value, type, size, whether to display the value, etc.
Response event: A change in the value.

f. Animation (Movie)
Definition: A combination of icons that changes over time.
Role: Makes the OSD interface more vivid with moving graphics and improves the expression of information.
Attributes: position, color, number of icons, speed of change, etc.
Response event: As fixed information content, it usually does not respond to UI input.

g. Number
Definition: A combination of digits that changes with the relevant parameter or with the user's selection; it may be shown in decimal or another radix, or as a percentage or other numerical form.
Role: Gives an intuitive numerical indication of a parameter, usually together with a progress bar so that the display is both precise and graphic.
Attributes: position, color, upper and lower limits, current value, radix, etc.
Response event: A change in the value of the corresponding parameter.

h. Optional icon (Option)
Definition: A combination of icons that changes as a parameter or the user's selection changes.
Role: Graphically represents the user's selection, such as selected, unselected, on, or off.
Attributes: position, color, flicker, number of choices, etc.
Response event: A change in the selection of the corresponding parameter.

i. Navigation information
Definition: Information presented on the OSD canvas that prompts the user's actions in the current UI scene.
Role: Directs the user to the buttons used to operate the OSD content. It usually indicates the available buttons with the necessary text descriptions, and it serves as a measure of how informative and friendly the OSD and the human-machine interface are.
Attributes: position, color, blinking, etc.
Response event: A change of UI scene or of the available buttons.

It should be noted that the objects above cannot cover everything a current or future OSD might contain, but they are the basic and main contents of an OSD. Classifying them and processing them in a unified way helps us complete 80-90% of typical OSD work.
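One way (not the only one) to prepare these elements for uniform processing is to tag each object with a type code so that a single drawing routine can dispatch on it. The following enumeration is a sketch; apart from RES_TXT, which appears in the example later in this article, the names are assumptions:

/* Assumed object-type tags mirroring the element list above. */
typedef enum {
    RES_AREA,    /* area                 */
    RES_LABEL,   /* fixed label          */
    RES_ICON,    /* icon                 */
    RES_TXT,     /* selectable text      */
    RES_BAR,     /* progress bar         */
    RES_MOVIE,   /* animation            */
    RES_NUM,     /* number               */
    RES_OPTION,  /* optional icon        */
    RES_NOTE     /* navigation info      */
} osd_obj_type_t;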

Handling the OSD UI with an object-based method

The traditional approach is to "draw" the OSD objects of a specific scene one by one in code. When a UI event arrives, a pile of if-else statements determines the current scene and the object being operated on, and then performs the corresponding OSD processing. For a simple OSD this is workable, but when there are many OSD scenes and modes the if-else structure becomes very large and, more importantly, extremely error-prone and expensive to maintain.
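For comparison, that traditional dispatch typically looks like the following sketch; the scene codes, key codes, and handler functions here are all invented for illustration:

/* Hand-written dispatch: every scene/key combination is spelled out
 * explicitly, which grows quickly and is easy to get wrong. */
#define SCENE_MAIN_MENU 0
#define SCENE_PICTURE   1
#define KEY_UP    0
#define KEY_DOWN  1
#define KEY_LEFT  2
#define KEY_RIGHT 3
#define KEY_OK    4

extern void draw_main_menu_next_item(void);     /* hypothetical handlers */
extern void draw_main_menu_prev_item(void);
extern void enter_picture_menu(void);
extern void adjust_brightness_and_redraw(signed char step);

void ui_handle_event_traditional(unsigned char scene, unsigned char key)
{
    if (scene == SCENE_MAIN_MENU) {
        if (key == KEY_DOWN)            draw_main_menu_next_item();
        else if (key == KEY_UP)         draw_main_menu_prev_item();
        else if (key == KEY_OK)         enter_picture_menu();
    } else if (scene == SCENE_PICTURE) {
        if (key == KEY_RIGHT)           adjust_brightness_and_redraw(+1);
        else if (key == KEY_LEFT)       adjust_brightness_and_redraw(-1);
        /* ...repeated for every scene, every key, every object */
    }
}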

As OSDs become more complex and the amount of code keeps growing, people realize that too much time is being spent on this "cosmetic" work, while the really important application layer and device-driver layer get less development time, which affects the progress of new products. Firmware engineers are also reluctant to rewrite the same code over and over to meet the changing requirements of a particular OSD.

I ran into the same problems early on. Facing the low efficiency of the engineers in my department, I realized the importance of building a unified OSD UI platform. The analysis of the OSD UI above now allows us to develop a development tool that is independent of the hardware environment of any particular digital video processor platform and OSD generation mechanism.

In fact, major suppliers of flat-panel display chip solutions such as Genesis and Pixelworks already provide Windows-based firmware development tools with these features in order to accelerate the adoption of their products. This article attempts to explore how this type of tool works; readers may be able to develop the tools they need based on it, and the approach has broader applicability.

The author used the following structure in a recent LCD TV development project:

typedef struct
{
    byte mode;           // mode(s) in which this UI scene applies
    byte lan;            // UI language
    byte scene;          // UI scene
    byte last;           // previous UI scene
    byte next;           // next UI scene
    byte sel;            // currently selected object in the scene
    byte sel_total;      // total number of selectable objects in the scene
    byte *info;          // pointer to the UI object's data
    byte pos_v;          // vertical position of the object
    byte pos_h;          // horizontal position of the object
    byte col_f;          // foreground color of the object
    byte col_b;          // background color of the object
    byte att;            // other display attributes of the object
    ACT_Struct (*act)[]; // pointer to the object's response action table
    byte *note;          // navigation instructions
} UI_Struct;

Figure 4: Pixelworks' GUI Builder OSD UI development tool interface.

Such a structure describes the basic attributes of an OSD object and specifies its behavior in response to actions. By describing every object in a scene with such structures, the OSD content of a specific UI scene is fully determined, and at the same time all of its UI characteristics, such as the previous scene, the next scene, and its responses to actions, are determined as well. This information forms an array that a unified "interpretation platform" translates and renders to build the entire UI.
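To make the idea concrete, a UI scene then becomes little more than an array of such UI_Struct descriptors that the "interpretation" platform walks through and draws; the following fragment is only a sketch, and UI_DrawObject() is an assumed platform routine rather than part of the author's code:

/* Sketch of how the "interpretation" platform might consume a scene:
 * it walks an array of UI_Struct descriptors and draws each object. */
extern void UI_DrawObject(const UI_Struct *obj);  /* assumed dispatcher */

void UI_DrawScene(const UI_Struct *objs, unsigned char count)
{
    unsigned char i;
    for (i = 0; i < count; i++)
        UI_DrawObject(&objs[i]);   /* dispatches on the object's type/info */
}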

This is somewhat like an interpreted language: all we need to do is write these "scripts", and the "interpretation" platform draws the objects' OSD by calling the driver code of the external OSD generator. When the OSD generator changes, or a different flat-panel display controller platform is used, only a small amount of the OSD driver code needs to be updated, which makes the UI system "platform independent".

We need to define data structures for the related objects so that the "interpretation" platform can identify each object's type and draw it correctly. For example, the following structure describes a language option (a text object):

void UI_ChangeLan()
{
    UI_Lan = VAL_LAN;
    ReDraw();
}

code byte *STR_LAN_CHN[] =
{
    "Chinese",
    "English",
    "French",
    "Spanish",
};

code word TXT_LAN_CHN[] =
{
    // object type tag, text resource, variable holding the current selection,
    // total number of selectable items, action to execute when the object changes
    RES_TXT, STR_LAN_CHN, VAL_LAN, sizeof(STR_LAN_CHN)/sizeof(byte *), UI_ChangeLan
};

The first element, RES_TXT, tells the "interpretation" platform that this object is text and has a text-object data structure, so the platform reads the following data according to the pre-agreed layout. The second element gives the source of the text content, STR_LAN_CHN; the third indicates which variable determines the currently selected item in the text resource; the fourth gives the number of texts the object can choose from; and the last specifies what must be done when the object changes. With this information the "interpretation" platform can "draw" the language option and automatically execute UI_ChangeLan() when a change occurs, letting the programmer do whatever the language change requires.

In fact, all of these structures are completely customizable, as long as they are consistent with the "interpretation" platform.
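As an illustration of what the "interpretation" side of such a convention might look like, the following sketch reads an equivalent text-object record through an explicit struct. Plain C types are used, the C51 "code" memory qualifier is dropped for clarity, and the field names and the OSD_DrawString() driver call are assumptions, not the author's actual platform code:

/* Hypothetical typed view of a text-object record. */
typedef struct {
    unsigned char        type;              /* RES_TXT                         */
    const char * const  *strings;           /* text resource, e.g. STR_LAN_CHN */
    unsigned char       *value;             /* variable holding current choice */
    unsigned char        count;             /* number of selectable items      */
    void               (*on_change)(void);  /* action when the object changes  */
} TXT_Struct;

extern void OSD_DrawString(unsigned char x, unsigned char y, const char *s);

static void UI_DrawTextObject(const TXT_Struct *obj,
                              unsigned char x, unsigned char y)
{
    unsigned char sel = *obj->value;
    if (sel >= obj->count)
        sel = 0;                             /* defensive clamp                */
    OSD_DrawString(x, y, obj->strings[sel]); /* assumed OSD driver call        */
}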

With such an OSD driver structure, once the "interpretation" platform has been built, OSD developers only need to place and stack the building blocks supported by the platform to lay out the OSD graphics, without rewriting implementation code or worrying about driver details tied to a particular hardware platform.

Going further, even the placement and design of these building blocks can be done in an intuitive Windows application that handles image-to-character component generation, OSD graphical interface design, and the generation of the final resource files and UI data arrays, which are then compiled together with the underlying "interpretation" platform to produce the final MCU code.

Such an OSD development environment is no longer abstract, tedious, and inefficient; it becomes intuitive and even enjoyable, and a customer with no programming experience and no knowledge of the OSD's underlying drivers can design the OSD interface.

It should be pointed out that, compared with the traditional if-else approach, a structured OSD UI processing mechanism increases the final program size and reduces execution speed. However, these drawbacks are negligible given the steadily increasing program memory and clock frequencies of MCUs. If the reader's MCU is severely limited in processing speed or program memory, such a scheme may not be applicable. Taking the LCD TV project developed by the author as an example, the whole MCS51-based multitasking system, with an OSD based on the Myson MTV230 OSD+MCU processor, fits in less than 32 KB of program memory while supporting all of the TV, graphics, music, game, and calendar functions, and it runs fast enough that no delay is perceptible. Development environments that support bitmap OSDs normally use x86 or faster ARM processors and more than 2 MB of program memory.

Summary of this article

As firmware engineers face increasingly complex applications, object-oriented, structured programming becomes more and more important. The immediate benefits are higher programming efficiency and lower maintenance costs, and it also helps program robustness. The advantages of the method presented here have been verified in real development projects: for the same OSD interface, the author was able to cut development time to a quarter of what it used to be while improving code quality.