

interrupt routines

 
davt



Joined: 07 Oct 2003
Posts: 66
Location: England


interrupt routines
Posted: Fri Jan 20, 2012 4:02 am

Hi all, I have read on numerous occasions on this forum that it is best to get out of an interrupt routine as soon as possible, i.e. set a flag and get the "hell out", almost as if it were an unsafe place to be! Can you explain why this is so important? As far as I can see, the program is only performing a "jump" to carry out a task and then returning. I designed a temperature/fan controller with switch statements in its interrupt routines and the display refresh in timer-based interrupts. It has functioned perfectly well for 5 years now with no problems. I obviously have been lucky!!
What is all the fuss about?
RF_Developer



Joined: 07 Feb 2011
Posts: 839


Re: interrupt routines
Posted: Fri Jan 20, 2012 6:18 am

davt wrote:
I obviously have been lucky!!


It's not really about "luck". It's about what is nowadays called "responsiveness". Some applications can work perfectly well with lengthy ISRs; these are generally ones which have few sources of interrupts and a slow interrupt repetition rate. Some applications are entirely "interrupt driven" and most of their processing is done in ISRs. As embedded apps get bigger and gain more functionality, the need for short, slick ISRs grows. My PIC apps measure maybe two temperatures and eight currents, control four voltages, send and receive possibly hundreds of CAN messages per second, keep track of real time and protect RF components many, many times more expensive than the PICs. They have to buffer incoming CAN messages 16 deep to cope at some times. My main loop time is never more than 1.5ms, and I've worked hard to achieve that. I'm not about to let some ISR push that out by more than 100us or so.

The key point to remember is that on very few processors are interrupts themselves interruptible. Main code can be interrupted, but once the processor is servicing an interrupt it is generally not practical, or in many cases even possible, to deal with any other interrupt: the processor must finish the processing in the interrupt routine before it can deal with anything else. Nested interrupts are possible, even on most PICs, but managing that is a nightmare, particularly with regard to stack usage and context management.

In an application which only has one or two interrupts, and these happen infrequently, that's not an issue and you can do a lot of processing in your ISR. Fine, no trouble. When there's a lot going on, with lots of different interrupts arriving at unpredictable rates, something, somewhere is going to have to wait. Now that's not so bad in some cases, but can literally be fatal in others. Consider a fly-by-wire control system. Lives may literally be at stake if an interrupt, or even some main processing, doesn't happen at the correct time. Equipment may be damaged if some vital protection code doesn't get run... all because somebody put a delay_ms(1000) or a tenth of a second's worth of processing in an ISR. Even the Apollo 11 LEM nearly had its lunar approach aborted because the guidance computer was overloaded with interrupts from a radar (used for docking) that wasn't actually needed for the landing!

In the past, and that's really still the case with most PICs which, let's face it, are not fast by any modern standard, ISR execution time was a greater percentage of elapsed time than it is on, say, any Windows-capable processor. ISR processing took vital processing power away from main application code. Just servicing a clock interrupt could take several percent of many processors' time. Not good. So ISRs traditionally had to be really short. They have to respond to realtime events; indeed, interrupts are the key enabler of all realtime systems. Realtime events come from the outside world, which waits for no processor. You can't interrupt the world and put it on hold for a while, unlike a processor. So traditionally ISRs simply serviced the cause of the interrupt and left everything else to main code. The classic example is IO interrupts, say a serial port. You get an interrupt when a character has been received. You had better deal with it BEFORE the next one comes in, otherwise you've lost it. Hardware buffering (FIFOs) has helped, but comms speeds have increased: at one time 9600 baud, or about 1000 characters per second, and hence that many RX interrupts, was the absolute maximum; now everyone accepts 152000 baud as standard. So all a serial receive ISR should do is read the character and stuff it into a buffer. No processing, no command parsing; possibly some protocol-type stuff, e.g. detection and flagging of end of packet, at most, but nothing more. Even CRC calculations can, and arguably should, be done in main code once a complete packet has been detected. I confess that on the ARM I didn't do that, I did CRCs on the fly in ISRs, but then I had half-decent 32-bit power available to me.
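
To make that concrete, here's a minimal sketch of that receive pattern in CCS C (the device, clock and buffer size are just assumptions for illustration; CCS ships a similar example, ex_sisr.c, if memory serves). The ISR grabs the byte and stores it; everything else waits for main:

Code:
#include <18F452.h>                    // assumed device, purely for illustration
#use delay(clock=20MHz)
#use rs232(baud=9600, xmit=PIN_C6, rcv=PIN_C7, ERRORS)

#define BUF_SIZE 32                    // illustrative size; overflow not handled here
char rx_buf[BUF_SIZE];
int8 rx_head = 0, rx_tail = 0;

#int_rda
void serial_isr(void)
{
   // Grab the byte straight away so the UART is free for the next one.
   rx_buf[rx_head] = getc();
   rx_head = (rx_head + 1) % BUF_SIZE;  // no parsing, no printf, just store it
}

void main(void)
{
   enable_interrupts(INT_RDA);
   enable_interrupts(GLOBAL);

   while (TRUE)
   {
      // All the slow work (parsing, CRCs, command handling) belongs here.
      while (rx_tail != rx_head)
      {
         char c = rx_buf[rx_tail];
         rx_tail = (rx_tail + 1) % BUF_SIZE;
         // ... process c at leisure ...
      }
   }
}

The ISR there is a handful of instructions, so even back-to-back characters get serviced in time, and main can be as slow as it likes.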

So historically ISRs had to be short and simple. That's less so today, but it's still important. In the PIC context, the real killer is the delay_ () family of calls. These simply eat up processor power for no gain and kill all other processing. I think the compiler should simply not allow them in ISRs. The occasional NOP (no-op instruction) is OK to give "waits" and "holds", and delay_us() calls probably translate into groups of them anyway.

Another way of looking at it is to think how you feel when a Windows app goes "unresponsive". How frustrating is that? What a waste of your time! Surely there's something else useful you could be doing while waiting for some app to process something or other. Well, it's just the same for your embedded code.

RF Developer.
bkamen



Joined: 07 Jan 2004
Posts: 1615
Location: Central Illinois, USA


Posted: Fri Jan 20, 2012 7:07 am

152000b/s?

Did you mean 115,200b/s? Hehehe...

-Ben
_________________
Dazed and confused? I don't think so. Just "plain lost" will do. :D
RF_Developer



Joined: 07 Feb 2011
Posts: 839


Posted: Fri Jan 20, 2012 7:18 am

Wibble. :oops:
ckielstra



Joined: 18 Mar 2004
Posts: 3680
Location: The Netherlands


Posted: Fri Jan 20, 2012 7:23 am

RF_Developer nailed it quite well.

Key things are:
- Responsiveness
- Asynchronous events

In 95% of PIC applications the main routine is doing the important stuff that the device was designed for. Then, at unpredictable moments, an interrupt is triggered when something in the real world happens. This asynchronous nature of interrupts means the main task has to drop whatever it is doing immediately in order to handle the interrupt.

When the main routine only performs some unimportant task, like showing the temperature on an LCD once a second, then you don't care if the interrupt handling takes a bit longer to finish. But assume the same product is extended with a 115kbit RS232 input; then things change: now you only have about 0.1ms to read the data from the UART or data will get lost!
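
To put a number on that: with the usual 8N1 framing a character is 10 bit times, so at 115,200 baud a new byte can arrive every 10 / 115200 ≈ 87us, roughly 0.1ms. Any ISR, or any stretch of disabled interrupts, that runs longer than that risks a UART overrun.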

Interrupts are a great tool, but they will impact the other parts of your system. The name "interrupt" is well chosen: the program is being 'interrupted' from its normal flow of work.

How much time you can spend in your interrupt routine is totally system dependent. Look at your system from the outside and define the maximum allowed response time for every entity. The smallest value you find is the maximum time your interrupt routine can afford to spend.

I know of one or two rare applications that do all their processing in the interrupt, where the main function is just an empty endless loop. But as a rule of thumb you will never get a bad system by doing it the other way around: keep ISRs as short as possible and do all processing in the main function (see the sketch after the list below).

Strong indications of a poorly designed ISR are the use of:
- delay_ms
- printf
- gets
- 32-bit multiplication or division
- floating point variables
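
And as a rough sketch of that rule of thumb (the device, clock, timer setup and the refresh_display() function are all invented for the example, not taken from any real project): the ISR only sets a flag, and the slow work runs in main.

Code:
#include <16F877.h>                   // assumed device, for illustration only
#use delay(clock=20MHz)

int1 tick_flag = FALSE;               // set by the ISR, cleared by main

#int_timer1
void timer1_isr(void)
{
   tick_flag = TRUE;                  // flag the event and get out
}

void refresh_display(void)
{
   // placeholder for the slow work: LCD update, printf, float maths...
}

void main(void)
{
   setup_timer_1(T1_INTERNAL | T1_DIV_BY_8);
   enable_interrupts(INT_TIMER1);
   enable_interrupts(GLOBAL);

   while (TRUE)
   {
      if (tick_flag)                  // poll the flag in the main loop
      {
         tick_flag = FALSE;
         refresh_display();           // runs interruptible, unlike the ISR
      }
   }
}

The ISR stays a few instructions long, so a serial receive or CAN interrupt firing at the same moment is still serviced well within its deadline.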
davt



Joined: 07 Oct 2003
Posts: 66
Location: England


Posted: Fri Jan 20, 2012 7:51 am

Thanks for those very detailed replies, much appreciated and reassuring!