Efficient C Code for Eight-Bit MCUs



Getting the best possible performance out of an eight-bit microcontroller C compiler is not always easy. This article concentrates mainly on those microcontrollers that were never designed to support high-level languages, such as the 8051 family, the 6800 family (including the 68HC11), and the PIC line of microcontrollers. Newer eight-bit machines such as the Philips 8051XA and the Atmel Atmega series were designed explicitly to support HLLs, and as such, may not need all the techniques I describe here.

My emphasis is not on algorithm design, nor does it depend on a specific microprocessor or compiler. Rather, I describe general techniques that are widely applicable. In many cases, these techniques work on larger machines, although you may decide that the trade-offs involved aren't worthwhile.

Before jumping into the meat of the article, let's briefly digress with a discussion of the philosophy involved. The microcontrollers I mentioned are popular for reasons of size, price, power consumption, peripheral mix, and so on. Notice that "ease of programming" is conspicuously missing from this list. Traditionally, these microcontrollers have been programmed in assembly language. In the last few years, many vendors have recognized the desire of users to increase their productivity, and have introduced C compilers for these machines, many of which are extremely good. However, it's important to remember that no matter how good the compiler, the underlying hardware has severe limitations. Thus, to write efficient C for these targets, it's essential that we be aware of what the compiler can do easily and what requires compiler heroics. In presenting these techniques, I have taken the attitude that I wish to solve a problem by programming a microcontroller, and that the C compiler is a tool, no different from an oscilloscope. In other words, C is a means to an end, and not an end in itself. As a result, many of my comments will seem heretical to the purists out there.

ANSI C

The first step to writing a realistic C program for an eight-bit machine is to dispense with the concept of writing 100% ANSI code. This concession is necessary because I do not believe it's possible, or even desirable, to write 100% ANSI code for any embedded system, particularly for eight-bit systems. Some characteristics of eight-bit systems that prevent ANSI compliance are:

• Embedded systems interact with hardware. ANSI C provides extremely crude tools for addressing registers at fixed memory locations. Consequently, most compiler vendors offer language extensions to overcome these limitations

• All nontrivial systems use interrupts. ANSI C does not have a standard way of coding interrupt service routines

• ANSI C has various type promotion rules that are absolute performance killers on an eight-bit machine. Unless your compiler allows you to defeat the ANSI promotion rules, the resulting code can be painfully slow

• Many microcontrollers have multiple memory spaces, which have to be specified in order to correctly address the desired variable. Thus, variable declarations tend to be considerably more complex than in the typical PC application

• Many microcontrollers have no hardware support for a C stack. Consequently, some compiler vendors dispense with a stack-based architecture, in the process eliminating several key features of C

This is not to say that I advocate junking the entire ANSI standard. Indeed, some of the essential requirements of the standard, such as function prototyping, are invaluable. Rather, I take the view that one should use standard C as much as possible. However, when it interferes with solving the problem at hand, do not hesitate to bypass it. Does this interfere with making code portable and reusable? Absolutely. But portable, reusable code that does not get the job done is not much use.

I've also noticed that every compiler has a switch that strictly enforces ANSI C and disables all compiler extensions. I suspect that this is done purely so a vendor can claim ANSI compliance, even though this feature is practically useless. I have also observed that vendors who strongly emphasize their ANSI compliance often produce inferior code (perhaps because the compiler has a generic front end that is shared among multiple targets) when compared to vendors that emphasize their performance and language extensions. Enough on the ANSI standard. Let's address specific actions that can be taken to make your code run more efficiently.

Data Types

Knowledge of the size of the underlying data types, together with careful data type selection, is essential for writing efficient code on eight-bit machines. Furthermore, understanding how the compiler handles expressions involving your data types can make a considerable difference in your coding decisions. The following paragraphs cover these topics.

Data Type Size

In the embedded world, knowing the underlying representation of the various data types is usually essential. I have seen many discussions on this topic, none of which has been particularly satisfactory or portable. My preferred solution is to include a file, types.h, an excerpt from which appears below:

#ifndef TYPES_H
#define TYPES_H

#include <limits.h>

/* Assign a built-in data type to BOOLEAN. This is compiler specific */
#ifdef _C51_
typedef bit BOOLEAN;
#define FALSE 0
#define TRUE 1
#else
typedef enum {FALSE = 0, TRUE = 1} BOOLEAN;
#endif

/* Assign a built-in data type to type CHAR.
   This is an eight-bit signed variable */
#if (CHAR_MAX == 127)
typedef char CHAR;
#elif (CHAR_MAX == 255)
/* Implies that by default chars are unsigned */
typedef signed char CHAR;
#else
/* No eight-bit data type */
#error Warning! Intrinsic data type char is not eight bits!
#endif

/* Rest of the file goes here */

#endif

The concept is quite simple. types.h includes the ANSI-required file limits.h. It then explicitly tests each of the predefined data types for the smallest type that matches signed and unsigned one-, eight-, 16-, and 32-bit variables. The result is that my data type UCHAR is guaranteed to be an eight-bit unsigned variable, INT is guaranteed to be a 16-bit signed variable, and so forth. In this manner, the following data types are defined: BOOLEAN, CHAR, UCHAR, INT, UINT, LONG, and ULONG. Several points are worth making:

• The definition of the BOOLEAN data type is difficult. Many eight-bit machines directly support single-bit data types, and I wish to take advantage of this if possible. Unfortunately, since ANSI is silent on this topic, it's necessary to use compiler-specific conditional compilation

• Some compilers define a char as an unsigned quantity, such that if a signed eight-bit variable is required, one has to use the unusual declaration signed char

• Note the use of the #error directive to force a compile error if I cannot achieve my goal of having unambiguous definitions of BOOLEAN, UCHAR, CHAR, UINT, INT, ULONG, and LONG

In all of the following examples, the types BOOLEAN, UCHAR, and so on will be used to specify unambiguously the size of the variable being used.

Data Type Selection

There are two rules for data type selection on eight-bit machines:

• Use the smallest possible type to get the job done

• Use an unsigned type if possible

The reasons for this are simply that many eight-bit machines have no direct support for manipulating anything more complicated than an unsigned eight-bit variable. However, unlike large machines, eight-bitters often provide direct support for manipulation of bits. Thus, the fastest integer types to use on an eight-bit machine are BOOLEAN and UCHAR. Consider the typical C code:

INT is_positive(int a)
{
    return (a >= 0) ? 1 : 0;
}

The better implementation is:

BOOLEAN is_positive(int a)
{
    return (a >= 0) ? TRUE : FALSE;
}

On an eight-bit machine we can get a large performance boost by using the BOOLEAN return type because the compiler need only return a bit (typically via the carry flag), vs. a 16-bit value stored in registers. The code is also more readable.

Let's take a look at a second example. Consider the following code fragment that is littered throughout most C programs:

INT j;

for (j = 0; j < 10; j++)
{
    :
}

This fragment produces poor code on most eight-bit machines. The better way to code this for eight-bit machines is as follows:

UCHAR j;

for (j = 0; j < 10; j++)
{
    :
}

The result is a huge boost in performance because we are now using an eight-bit unsigned variable (that can be manipulated directly) vs. a signed 16-bit quantity that will typically be handled by a library call. Note also that no penalty exists for coding this way on most big machines (with the exception of some RISC processors). Furthermore, a strong case exists for doing this on all machines. Those of you who know Pascal are aware that when declaring an integer variable, it's possible, and normally desirable, to specify the allowable range that the integer can take on. For example:

type loopindex = 0..9;
var j : loopindex;

Upon rereading the code later, you'll have additional information concerning the intended use of the variable. For our classical C code above, the variable INT j may take on values of at least -32768 to 32767. For the case in which we have UCHAR j, we inform others that this variable is intended to have strictly positive values over a restricted range. Thus, this simple change manages to combine tighter code with improved maintainability, not a bad combination.

Enumerated Types

The use of enumerated data types was a welcome addition to ANSI C. Unfortunately, the standard calls for the underlying data type of an enum to be an int. Thus, on many compilers, declaration of an enumerated type forces the compiler to generate 16-bit signed code, which, as I've mentioned, is extremely inefficient. This is unfortunate, especially as I have never seen an enumerated type list go over a few dozen elements, such that it could easily fit in a UCHAR. To overcome this limitation, several options exist, none of which is palatable:

• Check your compiler documentation. It may allow you to specify via a command line switch that enumerated types be put into the smallest possible data type. The downside is, of course, compiler-dependent code

• Accept the inefficiency as an acceptable trade-off for readability

• Dispense with enumerated types and resort to lists of manifest constants
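As a sketch of the third option (the state names here are invented for illustration, not from the article), a handful of manifest constants plus a UCHAR give much of the readability of an enum without dragging in 16-bit signed arithmetic:

```c
typedef unsigned char UCHAR;   /* as defined in types.h */

/* Instead of: typedef enum {STATE_IDLE, STATE_RUN, STATE_STOP} state_t;
   use manifest constants and store the state in a UCHAR. */
#define STATE_IDLE 0
#define STATE_RUN  1
#define STATE_STOP 2

static UCHAR next_state(UCHAR state)
{
    /* cycle IDLE -> RUN -> STOP -> IDLE */
    return (state == STATE_STOP) ? STATE_IDLE : (UCHAR)(state + 1);
}
```

The cost is losing the compiler's (admittedly weak) enum type checking; the gain is that every state variable, parameter, and comparison stays in a single byte.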

Integer Promotion Rules

The integer promotion rules of ANSI C are probably the most heinous crime committed against those of us who labor in the eight-bit world. I have no doubt that the standard is quite detailed in this area. However, the two most important rules in practice are the following:

• Any expression involving integral types smaller than an int has all the variables automatically promoted to int

• Any function call that passes an integral type smaller than an int automatically promotes the variable to an int if the function is not prototyped (another reason for using function prototyping)

The key word here is automatically. Unless you take explicit steps, the compiler is unlikely to do what you want. Consider the following code fragment:

CHAR a, b, res;
:
res = a + b;

The compiler will promote a and b to integers, perform a 16-bit addition, and then assign the lower eight bits of the result to res. Several ways around this problem exist. First, many compiler vendors have seen the light, and allow you to disable the ANSI automatic integer promotion rules. However, you're then stuck with compiler-dependent code.

Alternatively, you can resort to very clumsy casting, and hope that the compiler's optimizer works out what you really want to do. The extent of the casting required seems to vary among compiler vendors. As a result, I tend to go overboard:

res = (CHAR)((CHAR)a + (CHAR)b);

With complex expressions, the result can be hideous.
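The promotion rules are not just a speed issue; they can define the answer, which is why a compiler cannot silently substitute eight-bit arithmetic. A hosted sketch (my own example, not from the article) of where 16-bit and forced eight-bit arithmetic diverge:

```c
typedef unsigned char UCHAR;

/* ANSI semantics: a and b promote to int, so the sum 300 survives */
static int avg_promoted(UCHAR a, UCHAR b)
{
    return (a + b) / 2;
}

/* Forced eight-bit semantics: the cast truncates the sum to 44 first */
static UCHAR avg_truncated(UCHAR a, UCHAR b)
{
    return (UCHAR)((UCHAR)(a + b) / 2);
}
```

With a = 200 and b = 100, avg_promoted() returns 150 while avg_truncated() returns 22. A compiler may only drop to eight-bit arithmetic when it can prove the results agree, which is exactly what the explicit casts help it do.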

More Integer Promotion Rules

A third integer promotion rule that is often overlooked concerns expressions that contain both signed and unsigned integers. In this case, signed integers are promoted to unsigned integers. Although this makes sense, it can present problems in our eight-bit environment, where unsigned data types dominate. For example:

void demo(void)
{
    UINT a = 6;
    INT b = -20;

    (a + b > 6) ? puts("More than 6") : puts("Less than or equal to 6");
}

If you run this program, you may be surprised to find that the output is "More than 6." This problem is a very subtle one, and is even more difficult to detect when you use enumerated data types or other defined data types that evaluate to a signed integer data type. Using the result of a function call in an expression is also problematic.

The good news is that in the embedded world, the percentage of integral data types that must be signed is quite low, thus the potential number of expressions in which mixed types occur is also low. The time to be cautious is when reusing code that was written by someone who didn't believe in unsigned data types.
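When mixed types cannot be avoided, an explicit cast back to a signed type makes the intent visible. A sketch of the demo() comparison above, assuming the usual two's complement conversion found on virtually all microcontrollers:

```c
typedef unsigned int UINT;
typedef int INT;

/* ANSI rules: b converts to unsigned, so -20 becomes a huge value */
static int compare_ansi(UINT a, INT b)
{
    return (a + b > 6);
}

/* Force the comparison back into signed arithmetic */
static int compare_signed(UINT a, INT b)
{
    return ((INT)(a + b) > 6);
}
```

With a = 6 and b = -20, compare_ansi() reports "more than 6" while compare_signed() does not. (The unsigned-to-signed cast is implementation-defined in ANSI C, but on two's complement machines it yields -14 here.)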

Floating-Point Types

Floating-point arithmetic is required in many applications. However, since we're normally dealing with real-world data whose representation rarely goes beyond 16 bits (a 20-bit A/D converter on an eight-bit machine is rare), the requirements for double-precision arithmetic are tenuous, except in the strangest of circumstances. Again, the ANSI people have handicapped us by requiring that any floating-point expression be promoted to double before execution. Fortunately, a lot of compiler vendors have done the sensible thing, and simply defined doubles to be the same as floats, so that this promotion is benign. Be warned, however, that many reputable vendors have made a virtue out of providing a genuine double-precision data type. The result is that unless you take great care, you may end up computing values with ridiculous levels of precision, and paying the price computationally. If you're considering a compiler that offers double-precision math, study the documentation carefully to ensure that the automatic promotion to double can be disabled. If it can't, look for another compiler.
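One related detail worth knowing (a generic C point, not specific to any one compiler): an unsuffixed constant such as 3.0 is a double, so it can drag an otherwise all-float expression up to double precision on compilers with a genuine double type. The f suffix keeps everything single precision:

```c
/* double-precision constant: x * 3.0 may be computed as a double */
static float scale_slow(float x)
{
    return x * 3.0;
}

/* float constant: the multiply can stay in single precision */
static float scale_fast(float x)
{
    return x * 3.0f;
}
```

Both functions return the same value; on a compiler with true doubles, only the second avoids the double-precision library calls.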

While we're on this topic, I'd like to air a pet peeve of mine. Years ago, before decent compiler support for eight-bit machines was available, I would code in assembly language using a bespoke floating-point library. This library was always implemented using three-byte floats, with a long float consuming four bytes. I found that this was more than adequate for the real world. I've yet to find a compiler vendor that offers this as an option. My guess is that the marketing people insisted on a true ANSI floating-point library, the real world be damned. As a result, I can calculate hyperbolic sines on my 68HC11, but I cannot get the performance boost that comes from using just a three-byte float.

Having moaned about the ANSI-induced problems, let's turn to an area in which ANSI has helped a lot. I'm referring to the key words const and volatile, which, together with static, allow the production of better code.

Key Words


Static variables

When applied to variables, static has two primary functions. The first and most common use is to declare a variable that doesn't disappear between successive invocations of a function. For example:

void func(void)
{
    static UCHAR state = 0;

    switch (state)
    {
        :
    }
}

In this case, the use of static is mandatory for the code to work.

The second use of static is to limit the scope of a variable. A variable that is declared static at the module level is accessible by all functions in the module, but by no one else. This is important because it allows us to gain all the performance benefits of global variables, while severely limiting the well-known problems of globals. As a result, if I have a data structure that must be accessed frequently by a number of functions, I'll put all of the functions into the same module and declare the structure static. Then all of the functions that need to can access the data without going through the overhead of an access function, while at the same time, code that has no business knowing about the data structure is prevented from accessing it. This technique is an admission that directly accessible variables are essential to gaining adequate performance on small machines.

A few other potential benefits can result from declaring module-level variables static (as opposed to leaving them global). Static variables, by definition, may only be accessed by a specific set of functions. Consequently, the compiler and linker are able to make sensible choices concerning the placement of the variables in memory. For instance, with static variables, the compiler/linker may choose to place all of the static variables in a module in contiguous locations, thus increasing the chances of various optimizations, such as pointers being simply incremented or decremented instead of being reloaded. In contrast, global variables are often placed in memory locations that are designed to optimize the compiler's hashing algorithms, thus eliminating potential optimizations.
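A minimal sketch of the module pattern described above (the names are invented for illustration): the table is static at module level, so every function in this file reaches it directly, while nothing outside the module can touch it:

```c
typedef unsigned char UCHAR;

/* Module-private data: visible to every function in this file only */
static UCHAR cal_table[4];

/* Functions in the same module access the data with no call overhead */
void cal_set(UCHAR channel, UCHAR value)
{
    cal_table[channel & 3u] = value;
}

UCHAR cal_get(UCHAR channel)
{
    return cal_table[channel & 3u];
}
```

Code in other modules must go through cal_set()/cal_get(); code in this module reads cal_table directly, which is where the performance is won.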

Static functions

A static function is only callable by other functions within its module. While the use of static functions is good structured programming practice, you may also be surprised to learn that static functions can result in smaller and/or faster code. This is possible because the compiler knows at compile time exactly what functions can call a given static function. Therefore, the relative memory locations of functions can be adjusted such that the static functions may be called using a short version of the call or jump instruction. For instance, the 8051 supports both an ACALL and an LCALL op code. ACALL is a two-byte instruction, and is limited to a 2K address block. LCALL is a three-byte instruction that can access the full 8051 address space. Thus, use of static functions gives the compiler the opportunity to use an ACALL where otherwise it might use an LCALL. The potential improvements are even better with compilers that are smart enough to replace calls with jumps. For example:

static void fb(void);

void fa(void)
{
    :
    fb();
}

static void fb(void)
{
    :
}

In this case, because the call to fb() is the last line of function fa(), the compiler can substitute the call with a jump. Since fb() is static, and the compiler knows its exact distance from fa(), the compiler can use the shortest jump instruction. For the Dallas DS80C320, this is an SJMP instruction (two bytes, three cycles) vs. an LCALL (three bytes, four cycles).

On a recent project of mine, rigorous application of the static modifier to functions resulted in about a 1% reduction in code size. When your EPROM is 95% full (the normal case), a 1% reduction is most welcome!

A final point concerning static variables and debugging: for reasons that I do not fully understand, with many in-circuit emulators that support source-level debug, static variables and/or automatic variables in static functions are not always accessible symbolically. As a result, I tend to use the following construct in my project-wide include file:

#ifndef NDEBUG
#define STATIC
#else
#define STATIC static
#endif

I then declare variables using STATIC, so that while in debug mode, I can guarantee symbolic access to the variables.

Volatile Variables

A volatile variable is one whose value may be changed outside the normal program flow. In embedded systems, the two main ways this can happen are via an interrupt service routine, or as a consequence of hardware action (for instance, a serial port status register updates as a result of a character being received via the serial port). Most programmers are aware that the compiler will not attempt to optimize a volatile register, but rather will reload it every time.

The case to watch out for is when compiler vendors offer extensions for accessing absolute memory locations, such as hardware registers. Sometimes these extensions have either an implicit or an explicit declaration of volatility and sometimes they do not. The point is to fully understand what the compiler is doing. If you do not, you may end up accessing a volatile variable when you don't want to, and vice versa. For example, the popular 8051 compiler from Keil offers two ways of accessing a specific memory location. The first uses a language extension, _at_, to specify where a variable should be located. The second method uses a macro such as XBYTE[] to dereference a pointer. The "volatility" of these two is different. For example:

UCHAR status_register _at_ 0xE000;

This method is simply a much more convenient way of accessing a specific memory location. However, volatile is not implied here. Thus, the following code is unlikely to work:

while (status_register)
    ;  /* Wait for status register to clear */

Instead, one needs to use the following declaration:

volatile UCHAR status_register _at_ 0xE000;

The second method that Keil offers is the use of macros, such as the XBYTE macro, as in:

#define status_register XBYTE[0xE000]

Here, however, examination of the XBYTE macro shows that volatile is assumed:

#define XBYTE ((unsigned char volatile xdata *) 0)

(The xdata is a memory space qualifier, which isn't relevant to the discussion here and may be ignored.)

Thus, the code:

while (status_register)
    ;  /* Wait for status register to clear */

will work as you would expect in this case. However, in the case in which you wish to access a variable at a specific location that is not volatile, the use of the XBYTE macro is potentially inefficient.
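For compilers without an _at_ extension, the same polling behavior comes from plain ANSI pointer casts. Whether forming a pointer from a fixed integer address is legal is compiler-specific, so this sketch takes the register address as a parameter; the fixed-address form is shown in a comment:

```c
/* Polling loop: the volatile qualifier forces a reload of the
   register on every pass, so hardware changes are actually seen. */
static void wait_for_clear(volatile unsigned char *reg)
{
    while (*reg)
        ;
}

/* A fixed-location register would typically be declared as:
       #define STATUS_REGISTER (*(volatile unsigned char *)0xE000)
   and polled with:
       while (STATUS_REGISTER) ;                                   */
```

Without volatile, an optimizer is free to read the register once and spin forever on the cached value.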

Const variables

The keyword const, the most badly named keyword in the C language, does not mean constant! Rather, it means "read only." In embedded systems, there is a huge difference, which will become clear.

Const Variables vs. Manifest Constants

Many texts recommend that instead of using manifest constants, one should use a const variable. For instance:

const UCHAR NOS_ATOD_CHANNELS = 8;

instead of:

#define NOS_ATOD_CHANNELS 8

The rationale for this approach is that inside a debugger, you can examine a const variable (since it should appear in the symbol table), whereas a manifest constant is not accessible. Unfortunately, on many eight-bit machines you'll pay a significant price for this benefit. The two main costs are:

• The compiler creates a genuine variable in RAM to hold the variable. On RAM-limited systems, this can be a significant penalty

• Some compilers, recognizing that the variable is const, will store the variable in ROM. However, the variable is still treated as a variable and is accessed as such, typically using some form of indexed addressing. Compared to immediate addressing, this method is normally much slower

I recommend that you eschew the use of const variables on eight-bit machines, except in the circumstances described below.

Const Function Parameters

Declaring function parameters const whenever possible not only makes for better, safer code, but also has the potential for generating tighter code. This is best illustrated by an example:

void output_string(char *cp)
{
    while (*cp)
        putchar(*cp++);
}

void demo(void)
{
    char *str = "Hello, world";

    output_string(str);

    if ('H' == str[0]) {
        some_function();
    }
}

In this case, there is no guarantee that output_string() will not modify our original string, str. As a result, the compiler is forced to perform the test in demo(). If instead, output_string() is correctly declared as follows:

void output_string(const char *cp)
{
    while (*cp)
        putchar(*cp++);
}

then the compiler knows that output_string() cannot modify the original string str, and as a result it can dispense with the test and invoke some_function() unconditionally. Thus, I strongly recommend liberal use of the const modifier on function parameters.

Const Volatile Variables

We now come to an esoteric topic: can a variable be both const and volatile, and if so, what does that mean and how might you use it? The answer is, of course, yes (why else would it have been asked?), and it should be used on any memory location that can change unexpectedly (hence the need for the volatile qualifier) and that is read-only (hence the const). The most obvious example of this is a hardware status register. Thus, returning to the status_register example above, a better declaration for our status register is:

const volatile UCHAR status_register _at_ 0xE000;

Typed Data Pointers

We now come to another area in which a major trade-off exists between writing portable code and writing efficient code, namely the use of typed data pointers, which are pointers that are constrained in some way with respect to the type and/or size of memory that they can access. For example, those of you who have programmed the x86 architecture are undoubtedly familiar with the concept of using the __near and __far modifiers on pointers. These are examples of typed data pointers. Often the modifier is implied, based on the memory model being used. Sometimes the modifier is mandatory, such as in the prototype of an interrupt handler:

void __interrupt __far cntr_int7();

The requirement for the near and far modifiers comes about from the segmented x86 architecture. In the embedded eight-bit world, the situation is often far more complex. Microcontrollers typically require typed data pointers because they offer a number of disparate memory spaces, each of which may require the use of different addressing modes. The worst offender is the 8051 family, with at least five different memory spaces. However, even the 68HC11 has at least two different memory spaces (zero page and everything else), together with the EEPROM, pointers to which typically require an address space modifier.

The most obvious characteristic of typed data pointers is their inherent lack of portability. They also tend to lead to some horrific data declarations. For example, consider the following declaration from the Whitesmiths 68HC11 compiler:

@dir int * @dir zpage_ptr_to_zero_page;

This declares a pointer to an int. However, both the pointer and its object reside in the zero page (as indicated by the Whitesmiths extension, @dir). If you were to add a const qualifier or two, such as:

@dir const int * @dir const constant_zpage_ptr_to_constant_zero_page_data;

then the declarations can quickly become quite intimidating. Consequently, you may be tempted to simply ignore the use of typed pointers. Indeed, coding an application on a 68HC11 without ever using a typed data pointer is quite possible. However, by doing so, you pay a significant penalty in both code size and speed.

This area is so critical to performance that all hope of portability is lost. For example, consider two leading 8051 compiler vendors, Keil and Tasking. Keil supports a three-byte generic pointer that may be used to point to any of the 8051 address spaces, together with typed data pointers that are strictly limited to a specific data space. Keil strongly recommends the use of typed data pointers, but does not require it. By contrast, Tasking takes the attitude that generic pointers are so horribly inefficient that it mandates the use of typed pointers (an argument to which I am extremely sympathetic).

To get a feel for the magnitude of the difference, consider the following code, intended for use on an 8051:

void main(void)
{
    UCHAR array[16];          /* array is in the data space by default */
    UCHAR data *ptr = array;  /* note use of the data qualifier */
    UCHAR i;

    for (i = 0; i < 16; i++)
        *ptr++ = i;
}

Using a generic pointer, this code requires 571 cycles and 88 bytes. Using a typed data pointer, it needs just 196 cycles and 52 bytes. (The memory sizes include the startup code, and the execution times are just those for executing main().) With these sorts of performance increases, I recommend the use of typed data pointers, and paying the price in loss of portability and readability.

Use of assert

The assert() macro is commonly used on PC platforms, but almost never used on small embedded systems. There are several reasons for this:

• Many reputable compiler vendors don't bother to supply an assert macro

• Vendors that do supply the macro often provide it in an almost useless form

• Most embedded systems don't support a stderr to which the error may be printed


Before I discuss possible implementations, mentioning why assert() is important (even in embedded systems) is worthwhile. Over the years, I've built up a library of drivers to various pieces of hardware such as LCDs, ADCs, and so on. These drivers typically require various parameters to be passed to them. For example, an LCD driver that displays a text string on a panel would expect the row, the column, a pointer to the string, and perhaps an attribute parameter. When writing the driver, it is obviously important that the passed parameters are correct. One way of ensuring this is to include code such as this:

void lcd_write_str(UCHAR row, UCHAR column, char *str, UCHAR attr)
{
    row &= MAX_ROW;
    column &= MAX_COLUMN;
    attr &= ALLOWABLE_ATTRIBUTES;

    if (NULL == str)
        return;

    /* The real work of the driver goes here */
}

This code clips the parameters to allowable ranges, checks for a null pointer assignment, and so on. However, on a functioning system, executing this code every time the driver is invoked is extremely costly. But if the code is discarded, reuse of the driver in another project becomes a lot more difficult because errors in the driver invocation are tougher to detect.

The preferred solution is the liberal use of an assert macro. For example:

void lcd_write_str(UCHAR row, UCHAR column, char *str, UCHAR attr)
{
    assert(row <= MAX_ROW);
    assert(column <= MAX_COLUMN);
    assert(attr <= ALLOWABLE_ATTRIBUTES);
    assert(str != NULL);

    /* The real work of the driver goes here */
}


Assert #1

This example assumes that you have no spare RAM, no spare port pins, and virtually no ROM to spare. In this case, assert.h becomes:

#ifndef assert_h
#define assert_h

#ifndef NDEBUG
#define assert(expr)  \
    if (!(expr)) {    \
        while (1);    \
    }
#else
#define assert(expr)
#endif

#endif

Here, if the assertion fails, we simply enter an infinite loop. The only utility of this case is that, assuming you're running a debug session on an ICE, you will eventually notice that the system is no longer running. In which case, breaking the emulator and examining the program counter will give you a good indication of which assertion failed. As a possible refinement, if your system is interrupt-driven, inserting a "disable all interrupts" command prior to the while(1) may be necessary, just to ensure that the system's failure is obvious.

Assert #2

This case is the same as assert #1, except that in #2 you have a spare port pin on the microcontroller to which an error LED is attached. This LED is lit if an error occurs, thus giving you instant feedback that an assertion has failed. assert.h now becomes:

#ifndef assert_h
#define assert_h

#define ERROR_LED_ON()    /* Put expression for turning LED on here */
#define INTERRUPTS_OFF()  /* Put expression for interrupts off here */

#ifndef NDEBUG
#define assert(expr)       \
    if (!(expr)) {         \
        ERROR_LED_ON();    \
        INTERRUPTS_OFF();  \
        while (1);         \
    }
#else
#define assert(expr)
#endif

#endif

Assert #3

This example builds on assert #2. But in this case, we have sufficient RAM to define an error message buffer, into which the assert macro can sprintf() the exact failure. While debugging on an ICE, if a permanent watch point is associated with this buffer, then breaking the emulator will give instant information on where the failure occurred. assert.h for this case becomes:

#ifndef assert_h
#define assert_h

#define ERROR_LED_ON()    /* Put expression for turning LED on here */
#define INTERRUPTS_OFF()  /* Put expression for interrupts off here */

#ifndef NDEBUG
extern char error_buf[80];

#define assert(expr)                                    \
    if (!(expr)) {                                      \
        ERROR_LED_ON();                                 \
        INTERRUPTS_OFF();                               \
        sprintf(error_buf,                              \
                "Assert failed: " #expr                 \
                " (file %s line %d)\n",                 \
                __FILE__, (int)__LINE__);               \
        while (1);                                      \
    }
#else
#define assert(expr)
#endif

#endif

Obviously, this requires that you define char error_buf[80] somewhere else in your code.
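To see exactly what the macro deposits in the buffer, here is a host-side sketch of the formatting step alone; the function name and voltage check are invented, and the while (1) is omitted so the code can run to completion on a PC:

```c
#include <stdio.h>

char error_buf[256];   /* the text uses [80]; enlarged here so long host file paths fit */

/* Same formatting the assert macro performs: #expr stringizes the failed
   condition, and __FILE__/__LINE__ pinpoint the source location. */
#define RECORD_FAILURE(expr) \
    sprintf(error_buf, "Assert failed: " #expr " (file %s line %d)\n", \
            __FILE__, (int)__LINE__)

void check_voltage(int millivolts)
{
    if (!(millivolts < 5000))
        RECORD_FAILURE(millivolts < 5000);
}
```

After check_voltage(9000), error_buf holds a line beginning "Assert failed: millivolts < 5000", followed by the file name and line number.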

I do not expect that these three examples will cover everyone's needs. Rather, I hope they give you some ideas on how to create your own assert macros to get the maximum debugging information within the constraints of your embedded system.

Heretical Comments

So far, all of my suggestions have been about actively doing things to improve code quality. Now, let's address those areas of the C language that should be avoided, except in highly unusual circumstances. For some of you, the suggestions that follow will border on heresy.

Recursion

Recursion is a wonderful technique that solves certain problems in an elegant manner. It has no place on an eight-bit microcontroller. The reasons for this are quite simple:

• Recursion relies on a stack-based approach to passing variables. Many small machines have no hardware support for a stack. Consequently, either the compiler will simply refuse to support reentrancy, or else it will resort to a software stack in order to solve the problem, resulting in dreadful code quality.

• Recursion relies on a "virtual stack" that purportedly has no real memory constraints. How many small machines can realistically support virtual memory?
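Most recursive algorithms can be recast as loops that need only fixed storage. A minimal sketch (factorial is just a stand-in example, not from the article):

```c
#include <stdint.h>

/* Each recursive call burns a stack frame for n plus a return address --
   exactly what a stackless 8-bit target cannot afford. */
uint16_t fact_recursive(uint8_t n)
{
    return (n <= 1) ? 1 : (uint16_t)(n * fact_recursive((uint8_t)(n - 1)));
}

/* The same computation as a loop: constant storage, no call stack needed. */
uint16_t fact_iterative(uint8_t n)
{
    uint16_t result = 1;
    while (n > 1)
        result *= n--;
    return result;
}
```

On a compiler that passes arguments in fixed memory locations, only the iterative form compiles to sensible code.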

If you find yourself using recursion on a small machine, I respectfully suggest that you are either a) doing something really weird, or b) failing to understand the sum total of the constraints with which you're working. If it is the former, then please contact me, as I will be fascinated to see what you are doing.

Variable Length Argument Lists

You should avoid variable length argument lists because they too rely on a stack-based approach to passing variables. What about sprintf() and its cousins, you cry? Well, if possible, you should consider avoiding the use of these library functions. The reasons are as follows:

• If you use sprintf(), take a look at the linker output and see how much library code it pulls in. On one of my compilers, sprintf(), without floating-point support, consumes about 1K. If you're using a masked micro with a code space of 8K, this penalty is huge.

• On some compilers, use of sprintf() implies the use of a floating-point library.

• If the compiler does not support a stack, but rather passes variables in registers or fixed memory locations, then use of variable length argument functions forces the compiler to reserve a healthy block of memory simply to provide space for arguments that you may never use. For instance, if your compiler vendor assumes that the maximum number of arguments you can pass is 10, the compiler will reserve 40 bytes (assuming four bytes for the longest intrinsic data type).

Fortunately, many vendors are aware of these issues and have taken steps to mitigate the effects of using sprintf(). Notwithstanding these actions, taking a close look at your code is still worthwhile. For instance, writing my own wrstr() and wrint() functions (to output strings and ints respectively) generated half the code of using sprintf(). Thus, if all you need to format are strings and base-10 integers, the roll-your-own approach is beneficial (while still being portable).
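The article does not show its wrstr()/wrint() implementations, so the following is one plausible sketch. The single character-output hook is an invention here; it is buffered so the sketch can run on a host, whereas on a real target its body would be a UART register write:

```c
/* Output hook: buffered so the sketch is testable on a host.
   On target, replace the body with a write to the UART data register. */
static char out_buf[16];
static unsigned char out_len;

static void wrchar(char c)
{
    out_buf[out_len++] = c;
    out_buf[out_len] = '\0';
}

void wrstr(const char *s)               /* strings only -- no format parsing */
{
    while (*s)
        wrchar(*s++);
}

void wrint(int n)                       /* signed base-10 integers only */
{
    char digits[10];                    /* [6] suffices on a 16-bit-int target */
    unsigned char i = 0;
    unsigned int u;

    if (n < 0) {
        wrchar('-');
        u = (unsigned int)-n;           /* note: overflows for INT_MIN */
    } else {
        u = (unsigned int)n;
    }
    do {                                /* peel off digits, least significant first */
        digits[i++] = (char)('0' + (u % 10));
        u /= 10;
    } while (u != 0);
    while (i > 0)                       /* emit most significant first */
        wrchar(digits[--i]);
}
```

With no format-string parser and no varargs, the pair compiles to a fraction of sprintf()'s footprint, which is the point the text is making.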

Dynamic Memory Allocation

When you're programming an application for a PC, using dynamic memory allocation makes sense. The characteristics of PCs that permit and/or require dynamic memory allocation include:

• When writing an application, you may not know how much memory will be available. Dynamic allocation provides a way of gracefully handling this problem.

• The PC has an operating system, which provides memory allocation services.

• The PC has a user interface, such that if an application runs out of memory, it can at least tell the user and attempt a relatively graceful shutdown.

In contrast, small embedded systems typically have none of these characteristics. Therefore, I think that the use of dynamic memory allocation on these targets is silly. First, the amount of memory available is fixed, and is typically known at design time. Thus, static allocation of all the required and/or available memory may be done at compile time.

Second, the execution time overhead of malloc(), free(), and so on is not only quite high, but also variable, depending on the degree of memory fragmentation.

Third, use of malloc(), free(), and so on consumes valuable EPROM space. And lastly, dynamic memory allocation is fraught with danger (witness the recent series from P.J. Plauger on garbage collection in the January 1998, March 1998, and April 1998 issues of ESP). Consequently, I strongly recommend that you not use dynamic memory allocation on small systems.
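The compile-time alternative is a statically sized pool, trivially taken and released with no fragmentation. A sketch (the sizes and function names are invented for illustration):

```c
#include <stddef.h>
#include <stdint.h>

#define MSG_SIZE  32                    /* bytes per message buffer */
#define MSG_COUNT  4                    /* worst case, known at design time */

static uint8_t pool[MSG_COUNT][MSG_SIZE];
static uint8_t in_use[MSG_COUNT];       /* 0 = free, 1 = taken */

/* Bounded-time take/release, zero fragmentation, and the total RAM cost
   is visible in the link map at build time. */
uint8_t *msg_take(void)
{
    uint8_t i;
    for (i = 0; i < MSG_COUNT; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return pool[i];
        }
    }
    return NULL;                        /* pool exhausted: a design-time error */
}

void msg_release(uint8_t *msg)
{
    uint8_t i;
    for (i = 0; i < MSG_COUNT; i++)
        if (msg == pool[i])
            in_use[i] = 0;
}
```

If msg_take() ever returns NULL, the worst-case buffer count was wrong at design time, which is exactly the kind of failure an assert can trap during debugging.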

Final thoughts

I have attempted to illustrate how judicious use of both ANSI constructs and compiler-specific constructs can help generate tighter code on small microcontrollers. Often, though, these improvements come at the expense of portability and/or readability. If you are in the fortunate position of being able to use less efficient code, then you can ignore these suggestions. If, however, you are severely resource-constrained, then give a few of these techniques a try. I think you'll be pleasantly surprised.

Nigel Jones received a degree in engineering from Brunel University, London. He has worked in the industrial control field, both in the U.K. and the U.S.


