
# How do I convert an int to binary form?

cedarson@gmail.com
Guest
Posts: n/a

 02-10-2006
I am having trouble writing a simple code that will convert an int (
487 for example ) to binary form just for the sake of printing the
binary representation.

osmium
Guest
Posts: n/a

 02-10-2006
<(E-Mail Removed)> wrote:

>I am having trouble writing a simple code that will convert an int (
> 487 for example ) to binary form just for the sake of printing the

Use the left shift operator and the & operator. Skip the leading zeros and
when you first encounter a 1 in the leftmost position, print a char '1'.
Keep shifting and printing '1' or '0' depending on the int.

Keith Thompson
Guest
Posts: n/a

 02-10-2006
(E-Mail Removed) writes:
> I am having trouble writing a simple code that will convert an int (
> 487 for example ) to binary form just for the sake of printing the

See question 20.10 in the comp.lang.c FAQ, <http://www.c-faq.com/>.

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.

CBFalconer
Guest
Posts: n/a

 02-10-2006
(E-Mail Removed) wrote:
>
> I am having trouble writing a simple code that will convert an int (
> 487 for example ) to binary form just for the sake of printing the

Just this once, provided you read my sig and the URLs there
referenced.

#include <stdio.h>

/* ---------------------- */

static void putbin(unsigned int i, FILE *f) {
    if (i / 2) putbin(i / 2, f);
    putc((i & 1) + '0', f);
} /* putbin */

/* ---------------------- */

int main(void) {
    putbin( 0, stdout); putc('\n', stdout);
    putbin( 1, stdout); putc('\n', stdout);
    putbin(-1, stdout); putc('\n', stdout);
    putbin( 2, stdout); putc('\n', stdout);
    putbin(23, stdout); putc('\n', stdout);
    putbin(27, stdout); putc('\n', stdout);
    return 0;
} /* main */

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the

Mike Wahler
Guest
Posts: n/a

 02-10-2006

<(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
>I am having trouble writing a simple code that will convert an int (
> 487 for example ) to binary form just for the sake of printing the

Hints:

12345 % 10 == 5
12345 / 10 == 1234

Decimal numbers have base 10
Binary numbers have base 2

An array can be traversed either forward
or backward.

-Mike

Joe Wright
Guest
Posts: n/a

 02-10-2006
(E-Mail Removed) wrote:
> I am having trouble writing a simple code that will convert an int (
> 487 for example ) to binary form just for the sake of printing the
>

We're not supposed to do it for you but I'm feeling generous..

#include <stdio.h>  /* for putchar */

#define CHARBITS 8
#define SHORTBITS 16
#define LONGBITS 32
#define LLONGBITS 64

typedef unsigned char uchar;
typedef unsigned short ushort;
typedef unsigned long ulong;
typedef unsigned long long ullong;

void bits(uchar b, int n) {
    for (--n; n >= 0; --n)
        putchar((b & 1 << n) ? '1' : '0');
    putchar(' ');
}

void byte(uchar b) {
    bits(b, CHARBITS);
}

void word(ushort w) {
    int i;
    for (i = SHORTBITS - CHARBITS; i >= 0; i -= CHARBITS)
        byte(w >> i);
    putchar('\n');
}

...with my compliments.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---

pete
Guest
Posts: n/a

 02-10-2006
Joe Wright wrote:
>
> (E-Mail Removed) wrote:
> > I am having trouble writing a simple code that will convert an int (
> > 487 for example ) to binary form just for the sake of printing the
> >

> We're not supposed to do it for you but I'm feeling generous..
>
> #define CHARBITS 8

Why not use CHAR_BIT instead?

> typedef unsigned char uchar;

I don't like those kinds of typedefs.

> for (--n; n >= 0; --n)

My preferred way of writing that is:

while (n-- > 0)

--
pete

Joe Wright
Guest
Posts: n/a

 02-10-2006
pete wrote:
> Joe Wright wrote:
>
>>(E-Mail Removed) wrote:
>>
>>>I am having trouble writing a simple code that will convert an int (
>>>487 for example ) to binary form just for the sake of printing the
>>>

>>
>>We're not supposed to do it for you but I'm feeling generous..
>>
>>#define CHARBITS 8

>
>
> Why not use CHAR_BIT instead?
>

Why not indeed.
>
>>typedef unsigned char uchar;

>
>
> I don't like those kinds of typedefs.
>

I didn't know that. Sorry. I like it.
>
>> for (--n; n >= 0; --n)

>
>
> My preferred way of writing that is:
>
> while (n-- > 0)
>

Right you are too. I'll try to do it that way from now on.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---

Keith Thompson
Guest
Posts: n/a

 02-10-2006
Joe Wright <(E-Mail Removed)> writes:
> (E-Mail Removed) wrote:
>> I am having trouble writing a simple code that will convert an int (
>> 487 for example ) to binary form just for the sake of printing the
>>

> We're not supposed to do it for you but I'm feeling generous..
>
> #define CHARBITS 8
> #define SHORTBITS 16
> #define LONGBITS 32
> #define LLONGBITS 64
>
> typedef unsigned char uchar;
> typedef unsigned short ushort;
> typedef unsigned long ulong;
> typedef unsigned long long ullong;

If your CHARBITS, SHORTBITS, LONGBITS, and LLONGBITS are intended to
represent the minimum guaranteed number of bits, they're reasonable,
but you really need to document your intent. If they're intended to
represent the actual sizes of the types, the values you use are wrong
on some systems. You should use CHAR_BIT from <limits.h>; for the
others, you can use constructs like "CHAR_BIT * sizeof(short)" if you
don't mind your code breaking on a system that uses padding bits.

Typedefs like "uchar" for unsigned char are useless. Just use
"unsigned char" directly, so your readers don't have to guess what
"uchar" means. Likewise for the others. If you think there's some
value in saving keystrokes, use an editor that supports editor macros.

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.

Joe Wright
Guest
Posts: n/a

 02-11-2006
Keith Thompson wrote:
> Joe Wright <(E-Mail Removed)> writes:
>
>>(E-Mail Removed) wrote:
>>
>>>I am having trouble writing a simple code that will convert an int (
>>>487 for example ) to binary form just for the sake of printing the
>>>

>>
>>We're not supposed to do it for you but I'm feeling generous..
>>
>>#define CHARBITS 8
>>#define SHORTBITS 16
>>#define LONGBITS 32
>>#define LLONGBITS 64
>>
>>typedef unsigned char uchar;
>>typedef unsigned short ushort;
>>typedef unsigned long ulong;
>>typedef unsigned long long ullong;

>
>
> If your CHARBITS, SHORTBITS, LONGBITS, and LLONGBITS are intended to
> represent the minimum guaranteed number of bits, they're reasonable,
> but you really need to document your intent. If they're intended to
> represent the actual sizes of the types, the values you use are wrong
> on some systems. You should use CHAR_BIT from <limits.h>; for the
> others, you can use constructs like "CHAR_BIT * sizeof(short)" if you
> don't mind your code breaking on a system that uses padding bits.
>

Can you give a real example of "CHAR_BIT * sizeof (short)" (16 on my Sun
and x86 boxes) breaking? Do any modern CPU architectures have padding
bits in short, int, or long objects? Which?

> Typedefs like "uchar" for unsigned char are useless. Just use
> "unsigned char" directly, so your readers don't have to guess what
> "uchar" means. Likewise for the others. If you think there's some
> value in saving keystrokes, use an editor that supports editor macros.
>

Taste? Everyone I know, seeing "uchar" in type context will read
"unsigned char". You?

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---