- **C Programming**
(*http://www.velocityreviews.com/forums/f42-c-programming.html*)

  - **How to read an integer one by one?**
    (*http://www.velocityreviews.com/forums/t440439-how-to-read-an-integer-one-by-one.html*)

**How to read an integer one by one?**

Hi,
How can I get a digit from an integer, for example 12311, one by one for comparison? Thanks!

**Re: How to read an integer one by one?**

henrytcy@gmail.com writes:

> Hi,
>
> How can I get a digit from integer for example, 12311, one by one for
> comparison?

```c
#include <stdio.h>

#define BIG_ENOUGH 1000

int main(void)
{
    char buf[BIG_ENOUGH]; /* this must be at the top in C90 */
    char *ptr = buf;
    unsigned int x = 12311;
    unsigned int digit;

    /* If you want the last digit, it's simple: */
    digit = x % 10;

    /* Now you can go on and throw away that
     * digit from the original number: */
    x /= 10;

    /* And you could put it in a loop to get the
     * digits one after another: */
    while (x > 0) {
        digit = x % 10;
        /* do something with it */
        x /= 10;
    }

    /* If you want them in the same order as they are written,
     * it's a little bit more complicated: either you do it like
     * above, save the digits, and then use them in reverse order,
     * or you use the formatted output functions to create a
     * string and take the digits from there: */
    x = 12311;
    sprintf(buf, "%u", x); /* also consider snprintf... */
    while (*ptr) {
        digit = *ptr - '0';
        /* do something with digit */
        ++ptr;
    }
    return 0;
}
```

/Niklas Norrthon

**Re: How to read an integer one by one?**

Niklas Norrthon <do-not-use@invalid.net> wrote:

> #define BIG_ENOUGH 1000

How do you know this?

Richard

**Re: How to read an integer one by one?**

```c
int x = 12311;
char str[10];
sprintf(str, "%d", x);
```

Now you can use str to access the individual digits.

---
regards
-Prasad

**Re: How to read an integer one by one?**

rlb@hoekstra-uitgeverij.nl (Richard Bos) writes:

> Niklas Norrthon <do-not-use@invalid.net> wrote:
>
>> #define BIG_ENOUGH 1000
>
> How do you know this?

I don't, and later I also say consider snprintf.

(But platforms with ints larger than 3000 bits are not too common yet...)

/Niklas Norrthon

**Re: How to read an integer one by one?**

Richard Bos said:

> Niklas Norrthon <do-not-use@invalid.net> wrote:
>
>> #define BIG_ENOUGH 1000
>
> How do you know this?

By definition. :-)

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)

**Re: How to read an integer one by one?**

Niklas Norrthon <do-not-use@invalid.net> wrote:

> rlb@hoekstra-uitgeverij.nl (Richard Bos) writes:
>
>> Niklas Norrthon <do-not-use@invalid.net> wrote:
>>
>>> #define BIG_ENOUGH 1000
>>
>> How do you know this?
>
> I don't, and later I also say consider snprintf.
>
> (But platforms with ints larger than 3000 bits are
> not too common yet...)

Platforms on which you don't want to waste 1000 bytes are, though.

There is a good method for getting a reasonable, not too low and only barely too high, upper bound on the length of a decimally expanded int. It is, however, left as an exercise for the OP.

Richard

**Re: How to read an integer one by one?**

rlb@hoekstra-uitgeverij.nl (Richard Bos) writes:

> There is a good method for getting a reasonable, not too low and only
> barely too high, upper bound on the length of a decimally expanded int.
> It is, however, left as an exercise for the OP.

There are several. Which one you choose depends on the relative scarcity of CPU cycles and memory.

This one is slightly wasteful of memory, but is computed entirely at compile time:

```c
#include <limits.h>
char number[1 + (sizeof(int) * CHAR_BIT) / 3 + 1];
```

The resulting array will be large enough to store the null-terminated decimal representation of any integer in the range (INT_MIN, INT_MAX). Proof of this is left as an exercise for the reader.

For extra credit, show how and why the code can or must be modified to accommodate the range (UINT_MIN, UINT_MAX) instead.

DES
--
Dag-Erling Smørgrav - des@des.no

**Re: How to read an integer one by one?**

Dag-Erling Smørgrav wrote:

> rlb@hoekstra-uitgeverij.nl (Richard Bos) writes:
>
>> There is a good method for getting a reasonable, not too low and only
>> barely too high, upper bound on the length of a decimally expanded int.
>> It is, however, left as an exercise for the OP.
>
> There are several. Which one you choose depends on the relative
> scarcity of CPU cycles and memory.
>
> This one is slightly wasteful of memory, but is computed entirely at
> compile time:
>
>     #include <limits.h>
>     char number[1 + (sizeof(int) * CHAR_BIT) / 3 + 1];
>
> The resulting array will be large enough to store the null-terminated
> decimal representation of any integer in the range (INT_MIN, INT_MAX).
> Proof of this is left as an exercise for the reader.

Actually, please show why this works, especially why this works for INT_MIN. It's easy enough to see N / 3 + 1 can approximate ceil(log10(2^N)), but I can't get the details quite right for signed integers. I'm sure I'm overlooking something obvious.

> For extra credit, show how and why the code can or must be modified to
> accommodate the range (UINT_MIN, UINT_MAX) instead.

There is no UINT_MIN. It could have been defined simply as 0, but it isn't. The code needs no modification (it always allocates enough storage) but easily allows better approximations if we only consider unsigned integers.

For example,

```c
(sizeof(int) * CHAR_BIT * 28) / 93 + 1
```

will give an approximation that is precise for all integer types smaller than 93 bits (save that it may still be wasteful insofar as it cannot take into account padding bits). If we ignore the possibility of overflow through padding bits (which would be pathological), the best approximation is

```c
(sizeof(int) * CHAR_BIT * 643) / 2136 + 1
```

This wastes no space for integers with up to 15,436 bits (again, except for padding bits), which is probably more than we'll ever need.

S.

**Re: How to read an integer one by one?**

Skarmander said:

> Dag-Erling Smørgrav wrote:
>> This one is slightly wasteful of memory, but is computed entirely at
>> compile time:
>>
>>     #include <limits.h>
>>     char number[1+(sizeof(int)*CHAR_BIT)/3+1];

That looks familiar, but... well, enough of that later.

>> The resulting array will be large enough to store the null-terminated
>> decimal representation of any integer in the range (INT_MIN, INT_MAX).
>> Proof of this is left as an exercise for the reader.
>
> Actually, please show why this works, especially why this works for
> INT_MIN. It's easy enough to see N / 3 + 1 can approximate
> ceil(log10(2^N)), but I can't get the details quite right for signed
> integers. I'm sure I'm overlooking something obvious.

I can explain it easily enough, for the simple reason that I invented it. :-) (I'm quite sure there are other people who've invented it too, and long before I did, but at any rate I derived it independently, and so I know why it works.)

An int comprises sizeof(int) * CHAR_BIT bits. Since three bits can always represent any octal digit, and leaving signs and terminators aside for a second, it is clear that (sizeof(int) * CHAR_BIT) / 3 characters are sufficient to store the octal representation of the number (but see below). Since decimal represents numbers more efficiently than octal, what's good enough for octal is also good enough for decimal. Now we add in 1 for the sign, and 1 for the null terminator, and that's where the above expression comes from.

BUT: consider the possibility that the integer division truncates (which it will do if sizeof(int) * CHAR_BIT is not a multiple of 3). Under such circumstances, you could be forgiven for wanting some bodge factor in there. That's why I use:

```c
char number[1 + (sizeof(int) * CHAR_BIT + 2) / 3 + 1];
```

The + 2 is always sufficient to counter the effects of any truncation after division by 3, but doesn't inflate the result by more than one character at most.

Now let's look at a typical case, INT_MIN on a 32-bit int system with 8-bit chars. sizeof(int) on such a system is 4, and CHAR_BIT is 8, which gives us 32 + 2 = 34 for the parenthesised expression. Dividing this by three gives us 11 (integer division, remember), and then we add 1 for the sign and 1 for the null. That's 12 data bytes and a null byte. INT_MIN on that system is -2147483648, which is just 11 data bytes in length - so it turns out that our fudge factor wasn't strictly necessary on this occasion. (That's because decimal is, as I said, a little better than octal at representing numbers concisely.)

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
