

10-bit ADC on pic 18F4550
arthuki



Joined: 03 May 2017
Posts: 3


10-bit ADC on pic 18F4550
Posted: Wed May 03, 2017 1:44 pm

Hello everyone, I have a problem with my PIC18F4550 and don't know what to do. It is working as if the ADC had an 8-bit resolution, but I want it to work with a 10-bit resolution.

The code is simple:

Code:
float value;

void main()
{
#use delay (clock = 20000000)

   SETUP_ADC(ADC_CLOCK_INTERNAL);   
   SETUP_ADC_PORTS(AN0_TO_AN2); 
   SET_ADC_CHANNEL(0);   // conversion

   while(1)
   {
   SET_ADC_CHANNEL(1); 
   delay_us(20);
   value = READ_ADC();
   printf("%f   ",value);
   delay_ms(1000);
     
   }

}


and in my "18f4550.h" file I added the following code right at the beginning:

Code:
#device ADC=10


I am using Proteus ISIS to simulate the circuit, in which I apply 5V to analog pin 1 of the PIC, and I am able to see the output on the virtual terminal.

When I run the simulation the output is 255 (showing the PIC is working with an 8-bit ADC), instead of 1023.

Is there anything I did wrong? Did I miss anything?

Any help would be great. Thank you!


Last edited by arthuki on Wed May 03, 2017 3:34 pm; edited 1 time in total
Ttelmah



Joined: 11 Mar 2010
Posts: 19552


Posted: Wed May 03, 2017 1:53 pm

It's probably actually pointless to go to 10 bits...
First, ADC_CLOCK_INTERNAL will degrade the accuracy of the ADC. Set it up with the correct divisor instead; that will give a better result. When operating above 1MHz, the internal clock is not recommended unless you are putting the processor to sleep for the conversion (read the data sheet).
Then you are using the supply as your Vref. This is unlikely to even get close to 8-bit accuracy or repeatability. Remember the supply will have quite a few mV of ripple on it....
If you genuinely want to work at 10 bits, then you really must use a smooth, accurate reference. Proteus won't show you the real noise the ADC will have running from the supply.
Now your code can't work as posted. You are reading the ADC result into a variable called 'valor' and printing 'value'. You don't show the declaration of 'valor', or the transfer of data between these variables, but the obvious reason it would give an 8-bit response is that it is declared as an integer.
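
For reference, a minimal sketch of the kind of setup being suggested, assuming the 20MHz clock from the first post; the HS fuse and the ADC_CLOCK_DIV_32 choice are assumptions here, so check them against the Tad table in the 18F4550 data sheet:

Code:
#include <18F4550.h>
#device ADC=10                         // 10-bit results; goes in the source after the #include
#fuses HS,NOWDT,NOLVP                  // assuming an external 20MHz crystal
#use delay(clock=20M)
#use rs232(baud=9600, UART1, ERRORS)

void main(void)
{
   int16 raw;                          // read_adc() returns an integer; no float needed

   setup_adc_ports(AN0_TO_AN2);        // AN0..AN2 analog, Vref = Vdd/Vss
   setup_adc(ADC_CLOCK_DIV_32);        // Fosc/32 -> Tad = 1.6us at 20MHz
   set_adc_channel(1);

   while(1)
   {
      delay_us(20);                    // acquisition time before the conversion
      raw = read_adc();                // 0..1023 with #device ADC=10
      printf("%lu\r\n", raw);
      delay_ms(1000);
   }
}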
PCM programmer



Joined: 06 Sep 2003
Posts: 21708


Posted: Wed May 03, 2017 1:54 pm

If you do it like this, it should work. I can't test it in hardware right now,
but I looked at the .LST file and it is reading ADRESH and ADRESL, and
converting it to a float.
Code:
#include <18F4550.h>
#device ADC=10
#fuses INTRC_IO,NOWDT,PUT,BROWNOUT,CPUDIV1
#use delay(clock=4M)
#use rs232(baud=9600, UART1, ERRORS)

float value;

void main()
{

   SETUP_ADC(ADC_CLOCK_INTERNAL); 
   SETUP_ADC_PORTS(AN0_TO_AN2); 
   SET_ADC_CHANNEL(0);

   while(1)
   {
   SET_ADC_CHANNEL(1); 
   delay_us(20);
   value = READ_ADC();
   printf("%f   ",value);
   delay_ms(1000);
     
   }

}
arthuki



Joined: 03 May 2017
Posts: 3


Posted: Wed May 03, 2017 3:41 pm

The confusion between "valor" and "value" was just a typo I made while translating the code. Thank you for the warning. I have edited the post for future viewers.

Ttelmah

You said "Set it up with the correct divisor instead". What would it be?

Also, you said "Then you are using the supply as your Vref". I don't quite understand what you mean, as I don't have much experience with PIC microcontrollers.

The 5V I mentioned was from an external DC source.

Sorry for the newbie questions.

Thank you guys again
temtronic



Joined: 01 Jul 2010
Posts: 9246
Location: Greensville,Ontario


Posted: Wed May 03, 2017 4:09 pm

1) As for the ADC clock, please read the ADC section of the data sheet for the PIC. You'll find a chart that shows valid ADC clocks based upon CPU speed. Yes, it takes some reading, but once you understand how the ADC section works, it'll make sense.

2) Vref is the voltage the ADC uses as a reference for all its measurements. Vref MUST be rock stable. Using Vdd (the PIC power supply) is NOT recommended for several reasons: ANY change in Vdd will result in wrong ADC readings. While you may think Vdd is, say, 5.00 all the time, it does vary depending on the 'load' of the PIC and peripherals. Say you energize a relay before you read the ADC: Vdd will drop to, say, 4.85 volts. The PIC will still operate, BUT Vref is now 4.85, not the 5.00 you think it is!
With 10-, 12- or 16-bit ADC units you must use a 'voltage reference' device. Typically they are 4.096V, perfect for a 12-bit ADC: 1 bit = 1mV. With a 10-bit ADC, use a 1.024V Vref device and again 1 bit = 1mV (see the arithmetic sketch after this list). The point is that Vref devices MUST be stable.

3) In order to get accurate, repeatable readings you'll also have to design proper ground planes, wiring layout, filtering, etc. It is challenging to get 10 bits to work, and a LOT of work to get 16 bits! BTDT (been there, done that).
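
To illustrate point 2, a sketch of the count-to-millivolt arithmetic; the function name is illustrative, and the reference values are the ones mentioned above:

Code:
// millivolts = counts * Vref_in_mV / full_scale_counts
// 12-bit ADC with a 4.096V reference: 4096mV / 4096 counts -> 1 count = 1mV
// 10-bit ADC with a 1.024V reference: 1024mV / 1024 counts -> 1 count = 1mV
int32 adc_to_mv(int16 counts, int32 vref_mv, int32 full_scale)
{
   return ((int32)counts * vref_mv) / full_scale;
}

// e.g. adc_to_mv(read_adc(), 1024, 1024) with #device ADC=10 and a 1.024V Vref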

Jay
Ttelmah



Joined: 11 Mar 2010
Posts: 19552


Posted: Thu May 04, 2017 12:38 am

Have a look at this thread:

<http://www.ccsinfo.com/forum/viewtopic.php?t=56005>

There are several others on the forum, but this one is nice, since the original poster came back, and confirmed just how much things improved once he used a proper Vref. :)
arthuki



Joined: 03 May 2017
Posts: 3


Posted: Thu May 04, 2017 9:21 am

Thank you all for the answers.

I thought it would be much simpler to use 10 bits on the ADC, but now I have learned that I also need some extra circuitry to make it work well. I think I'll just use the 8-bit ADC, since I don't need high resolution.

Thank you
alyeomans



Joined: 26 Feb 2014
Posts: 24


Posted: Thu May 18, 2017 6:38 am

I've had a few projects with the PIC18F4620, which is quite similar.

My experience with this chip's 10-bit ADC, contrary to Ttelmah's, has been good. I have been able to measure battery voltage quite accurately, to 0.1V on a 12V battery, using the internal clock and reference, and have pushed it successfully to 0.02V for a 7.2V battery charger with a sensitive negative-delta-voltage cut-off.

I've just pieced together the code below from what I use, but have not tested it as I don't have a 4620 to hand. I use inductors and decoupling caps wherever possible to help reduce switching noise in the circuit.

Code:

#include <18F4620.h>
#device adc=10

//FUSES
#FUSES NOWDT                    //No Watch Dog Timer
#FUSES INTRC                    //Internal RC Osc, with CLKOUT
#FUSES NOPROTECT                //Code not protected from reading
#FUSES NOBROWNOUT
#FUSES NOPUT                    //No Power Up Timer
#FUSES NODEBUG                  //No Debug mode for ICD
#FUSES NOLVP                    //No low voltage programming
#FUSES NOWRT                    //Program memory not write protected
#FUSES MCLR                     //Master Clear pin enabled

#use delay(clock=8M)                    //8MHz internal oscillator, matches setup_oscillator() below
#use rs232(baud=9600, UART1, ERRORS)    //hardware UART for the printf() output

void main(void) {
   int16 adc1value;

   setup_oscillator(OSC_INTRC | OSC_8MHZ);
   setup_adc_ports(AN0_TO_AN1);         //AN0/AN1 analog, Vref = Vdd/Vss
   setup_adc(ADC_CLOCK_DIV_8);          //Fosc/8 -> Tad = 1us at 8MHz
   set_adc_channel(1);

   while(1){
      delay_ms(100);         //stabilize time
      adc1value = read_adc();
      printf("ADC=\t%lu\n\r",adc1value);
   }
}


Cheers
Al
Ttelmah



Joined: 11 Mar 2010
Posts: 19552


Posted: Thu May 18, 2017 7:07 am

There is a big difference between 'resolution' and 'accuracy'....

You won't have anywhere near the 'accuracy' that you think.

You need to actually understand the limitations of things.

10-bit 'resolution' is quite easy. 10-bit 'accuracy' is not. Having the resolution without accuracy doesn't actually give you any worthwhile data...
asmboy



Joined: 20 Nov 2007
Posts: 2128
Location: albany ny


Posted: Thu May 18, 2017 7:32 am

I have used Vdd as the A/D reference (Vref) - even with 12-bit PICs - with one-bit accuracy - no sweat. It can be done and perform to full spec, BUT it takes care in design, construction and coding.

1 - I use a dedicated, trimmable Vdd: an LDO 5V regulator for the PIC, with a 100uF tantalum cap at the PIC Vdd pin.

2 - NO high-current I/O pins on said PIC.

3 - Lastly, I use this method of reading the ADC:

http://www.ccsinfo.com/forum/viewtopic.php?t=50320&highlight=olympic
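
For anyone who can't follow the link, the idea referenced there is an 'olympic' average. A minimal sketch of that approach (the reading count and function name here are illustrative, not taken from the linked code):

Code:
// "Olympic" average: take several readings, discard the lowest and highest,
// then average the rest, to reject single-sample noise spikes.
int16 read_adc_olympic(void)
{
   int8  i;
   int16 sample, lo, hi;
   int32 sum;

   sum = 0;
   lo  = 0xFFFF;
   hi  = 0;

   for (i = 0; i < 8; i++)
   {
      sample = read_adc();
      if (sample < lo) lo = sample;
      if (sample > hi) hi = sample;
      sum += sample;
      delay_us(20);                       // re-acquisition time between readings
   }

   return (int16)((sum - lo - hi) / 6);   // 8 readings minus the min and max, averaged
}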
Ttelmah



Joined: 11 Mar 2010
Posts: 19552


Posted: Thu May 18, 2017 8:58 am

Though not as badly as most posters here, you are still misleading yourself....

First of all, the ADC will have an offset error: for 12-bit PICs, typically up to 2 LSB!... This is how far 'out' the zero reading from the ADC can actually be.
Then there will be gain error, the error at the top of the scale: for a 12-bit chip, typically 4 LSB. Ouch.
Then there is the non-linearity: how far away from a nice straight line the ADC response may be. Typically at least one bit, and on most of the 12-bit PICs a couple of bits.

So even if your reference were perfect, and everything else ideal, you would probably still have at least a couple of bits of error at some points in the range... :(
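
To put rough numbers on those typical figures for a 12-bit converter: 2 LSB (offset) + 4 LSB (gain) + 2 LSB (non-linearity) is 8 LSB worst case, and 8 counts is 2^3, i.e. about 3 of the 12 bits, leaving roughly 9 bits you can actually trust, before reference and supply errors are even added.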

Now, you avoid the other problems by keeping noise low and calibrating the Vdd. However, you mislead yourself again with the supply: the best 'super accurate' LDO regulators will still vary by up to perhaps 0.5% with temperature etc.
Microchip warrants the monotonicity of the ADC (so an increase in voltage will always result in an increasing reading, and vice versa). However, even if you have 10 identical units and calibrate all to the same voltage, you will find that there is a noticeable difference between the units as the voltage moves across the span.

A 10/12-bit reading? Easy. Just set the system to use 10/12 bits.
A 10/12-bit resolution? Harder. However, depending on what you mean by this, yes.
A 10/12-bit repeatability? Short term, for a single unit with multi-point calibration, similar to the resolution. However, over the whole scale and without that, no.
A 10/12-bit accuracy? No. For 10 bits with a good Vref you will get close. From the supply, with a hand-calibrated supply, 'short term', something close to 9 bits may be achieved. But genuinely having the situation where you can take 10 units, feed them 1.000, 2.000, 3.000 volts as an input, have them all read 1.000, 2.000, 3.000 with the same result from every unit, and have the readings stay the same over a good range of temperatures and time, is just not going to happen, I'm afraid.

I doubt if your units are actually giving you more than perhaps 10 bits repeatably, especially since this is the real accuracy that Microchip warrants for most of their 12-bit PIC ADCs.

Pull up TI application note SLAA687, which gives an overview of ADC errors. Feed some of the PIC ADC specs into it, then add the errors from your 'reference' source, and be prepared for just how bad your values really are....
asmboy



Joined: 20 Nov 2007
Posts: 2128
Location: albany ny


Posted: Thu May 18, 2017 9:04 am

All I can say in defense of the method described is that the end results of the design have been excellent, maintaining closed-loop precision temperature control to better than 1/4 degree around 37C, where the spec requirement is 1 degree or better.

Admittedly, to use all 12 bits of the converter, the signal was analog 'gained and offset' such that the span of 0 to 80C is read as 0 to 5VDC, yielding about 32 counts per degree C.

The circuit performance described had to be audited and verified - it's not my own test result quoted above. And in a high-noise, mains power-modulator switching environment at that...
Ttelmah



Joined: 11 Mar 2010
Posts: 19552


Posted: Thu May 18, 2017 9:28 am

So, think about it. You are actually getting about 1/4 degree with 32 counts per degree, so the bottom 8 counts or so (3 bits) are not really significant. That's about 9 bits actually being used, exactly what I predicted.....

I spent several years working for a lab, where we tested kit for certain certifications. It was amazing how little actually met what was claimed.
temtronic



Joined: 01 Jul 2010
Posts: 9246
Location: Greensville,Ontario


Posted: Thu May 18, 2017 9:29 am

Getting 'high bit' ADCs to perform is a mix of science and magic. The 'science' part is understanding the data sheets, designing the PCB properly, and coding to take care of most quirks. The 'magic' portion is dealing with the EMI, cellphones, crosstalk, welders, etc. that randomly hit your ADC.
35 years ago I got great, repeatable performance from 32-channel, 16-bit ADCs INSIDE optical emission spectrometers that generated LOTS of EMI, as did the 4 other units in the same room.
What looked 'fine' on paper didn't work in the real world: a LOT of tweaks here, added stuff there, crossed fingers, and MONTHS of 24/7 testing.
Yes, it can be done.... all it takes is time, patience and a rabbit's foot.

Jay
newguy



Joined: 24 Jun 2004
Posts: 1909


Posted: Thu May 18, 2017 9:32 am

Your application is probably part of the reason why your real-world results are good: the range of practical values is pretty "tight". Over your small practical temperature range, accuracy, linearity and repeatability are all good simply because you're not getting readings near the edges of the ADC's input range.

At my previous job, I inherited a couple of boards that employed a processor with a 12-bit A/D which was getting readings (welding current) from a 900A full-scale hall-effect current sensor. Anyone can very quickly come up with the maximum possible precision (under ideal conditions): 900A/4096 = 0.22A. Again, that's the best it could possibly be, ignoring noise, non-linearity, temperature swings, etc. Well, one of my many predecessors devised some weird in-house quasi-floating-point math and was using that to calculate and report currents down to 0.01A (calculations were to a higher resolution, but were rounded to 0.01A for reporting purposes). And that was the input to a PID routine that, due to an error by the same person who came up with the quasi-FP library, was actually a PI routine, because they didn't know what a derivative was. And the quasi-FP was actually unstable: if you, for example, tried to multiply 20.0 x 1.0 x 1.0 x 1.0 (etc.), the result wouldn't be anywhere near 20.

Suffice it to say that it didn't work "in the real world" all that well. I completely rewrote that firmware from scratch using the standard integer math I think we're all intimately familiar with, eliminating the quasi-FP. I also fixed the PID, once I had appropriately scaled the constants for use with the integer values. The instability went away (it had manifested during calibration), the PID control was actually PID (and would thus respond to a step input, whereas the previous version wouldn't because it lacked a derivative term), and the real-world performance was night and day different (in a good way).
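
As an aside, the resolution limit above is easy to respect with that same plain integer math. A sketch (the names and scaling are illustrative, not from that firmware): with a 900A full-scale sensor on a 12-bit converter, the smallest honest step is 900A/4096 = 0.22A, so there is no point reporting finer than roughly a quarter of an amp.

Code:
// Illustrative scaling for a 900A full-scale sensor read by a 12-bit ADC (0..4095).
// 900A / 4096 counts ~= 0.22A per count, so anything finer than ~0.25A is fiction.
// Plain 32-bit integer math, scaled so the intermediate product fits in an int32:
// counts * 900000 / 4096 == counts * 225000 / 1024, and 4095 * 225000 < 2^31.
int32 counts_to_ma(int16 counts)
{
   return ((int32)counts * 225000) / 1024;   // result in milliamps, 0..899780
}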

I honestly don't understand why computer-science types (software engineers) aren't taught the basic practical limitations of real hardware. Honestly, how on earth do you expect to get resolution better than what the hardware can deliver? I almost forgot: "calibration" of these devices never involved measuring a known welding current. "Calibration" involved measuring a simulated hall-effect output. ....I know. I stopped trying to get them to see that our "cal" step was useless. I think I could honestly train my coffee cup to talk before I could get them to understand why what we were doing wasn't a "cal" at all.