Velocity Reviews (http://www.velocityreviews.com/forums/index.php)
-   C Programming (http://www.velocityreviews.com/forums/f42-c-programming.html)
-   -   #define BYTES *8 (http://www.velocityreviews.com/forums/t951477-define-bytes-8-a.html)

 JohnF 08-28-2012 07:44 AM

#define BYTES *8

Anything wrong with that, i.e., with #define BYTES *8
to multiply by 8? It looked a little weird to me, but
the more obvious #define BYTES(x) ((x)*8) isn't what
I wanted to write. This was to express units of measurement,
e.g., int bits = so_many BYTES; rather than
int bits=BYTES(so_many); . I just wanted to read and write
it the first way, and my test program
#define BYTES *8
#include <stdio.h>
#include <stdlib.h>   /* for atoi */
int main ( int argc, char *argv[] ) {
int bytes=(argc<2?1:atoi(argv[1])),
bits = bytes BYTES;
printf("%d bytes = %d bits\n",bytes,bits);
return(0); }
works fine (and compiles with no -pedantic warnings).
But that #define BYTES *8 still looks a little funky to me.
I realize 2+2 BYTES must be written (2+2)BYTES, but is there
any other kind of "gotcha"?
--
John Forkosh ( mailto: j@f.com where j=john and f=forkosh )

 Mark Bluemel 08-28-2012 08:07 AM

Re: #define BYTES *8

On 28/08/2012 08:44, JohnF wrote:
> Anything wrong with that, i.e., with #define BYTES *8
> to multiply by 8? It looked a little weird to me, but
> the more obvious #define BYTES(x) ((x)*8) isn't what
> I wanted to write. This was to express units of measurement,
> e.g., int bits = so_many BYTES; rather than
> int bits=BYTES(so_many); . I just wanted to read and write
> it the first way, and my test program
> [test program snipped]
> works fine (and compiles with no -pedantic warnings).
> But that #define BYTES *8 still looks a little funky to me.
> I realize 2+2 BYTES must be written (2+2)BYTES, but is there
> any other kind of "gotcha"?
>

I hate the name - it doesn't actually convey the intent to my mind.

 Keith Thompson 08-28-2012 08:38 AM

Re: #define BYTES *8

> Anything wrong with that, i.e., with #define BYTES *8
> to multiply by 8? It looked a little weird to me, but
> the more obvious #define BYTES(x) ((x)*8) isn't what
> I wanted to write. This was to express units of measurement,
> e.g., int bits = so_many BYTES; rather than
> int bits=BYTES(so_many); . I just wanted to read and write
> it the first way, and my test program
> [test program snipped]
> works fine (and compiles with no -pedantic warnings).
> But that #define BYTES *8 still looks a little funky to me.
> I realize 2+2 BYTES must be written (2+2)BYTES, but is there
> any other kind of "gotcha"?

I'd say it violates the principle of least surprise. Seeing an
identifier used that way, effectively as a postfix unary operator, is
confusing; it's very difficult to understand unless you examine the
definition of the macro.

Most macros can be understood, in terms of their meaning if not their
implementation, by looking at their names and arguments.

Something similar: Years ago, I thought that this:

#define EVER (;;)

was clever, because I could write:

for EVER {
/* ... */
}

I still think it's clever. I just no longer think that's a good thing.

And as Mark Bluemel indicates, the name doesn't convey the meaning well.
It says that you're working with bytes somehow, but it doesn't imply
anything about a conversion to bits.

If you want to compute the number of bits in a given number of bytes,
and you're using the word "byte" as it's defined by the C standard, then
you need to multiply by CHAR_BIT, defined in <limits.h>:

bits = bytes * CHAR_BIT;

That's clear enough that no additional macro is needed.

--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Will write code for food.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

 JohnF 08-28-2012 09:21 AM

Re: #define BYTES *8

Keith Thompson <kst-u@mib.org> wrote:
>> Anything wrong with that, i.e., with #define BYTES *8
>> to multiply by 8? It looked a little weird to me, but
>> the more obvious #define BYTES(x) ((x)*8) isn't what
>> I wanted to write. This was to express units of measurement,
>> e.g., int bits = so_many BYTES; rather than
>> int bits=BYTES(so_many); . I just wanted to read and write
>> it the first way, and my test program
>> [test program snipped]
>> works fine (and compiles with no -pedantic warnings).
>> But that #define BYTES *8 still looks a little funky to me.
>> I realize 2+2 BYTES must be written (2+2)BYTES, but is there
>> any other kind of "gotcha"?

>
> I'd say it violates the principle of least surprise. [...]
> [...] If you want to compute the number of bits in a given
> number of bytes,
> bits = bytes * CHAR_BIT;
> That's clear enough that no additional macro is needed.

My bad. The actual app involves physical units. I just used
bits/bytes as an example for the post. Moreover, the actual
program is user-modifiable at compile time by way of an
#include "userformulas.h" in the middle of the code (not as
a header) containing stuff like length=10 feet; width=3 inches;
height=4 meters; etc; (And please don't tell me that
should be input as data. The alternatives were considered
and the decision's been made.) So you can see why I want
to do this. The only question I'm asking is about any
straightforward semantic problems arising from the
funky-looking syntactic construction.
--
John Forkosh ( mailto: j@f.com where j=john and f=forkosh )

 BartC 08-28-2012 11:34 AM

Re: #define BYTES *8

> My bad. The actual app involves physical units. I just used
> bits/bytes as an example for the post. Moreover, the actual
> program is user-modifiable at compile time by way of an
> #include "userformulas.h" in the middle of the code (not as
> a header) containing stuff like length=10 feet; width=3 inches;
> height=4 meters; etc;

I use the same thing elsewhere:

x = 3m+50cm

so x ends up as 3500 when the basic unit is mm (m scales by 1000, and cm by
10).

So it sounds reasonable enough and using #define as you have sounds like the
closest you might get in C source code.

However:

o I only use such a suffix after a constant
o It only applies to the immediately preceding constant

So your 2+2 BYTES example elsewhere would need to be written as 2 BYTES + 2
BYTES, which makes more sense anyway.

> The only question I'm asking
> is about any straightforward semantic problems arising from
> the funky-looking syntactic construction.

Without proper language support, your macro could be used in ways that were
not intended (2 BYTES BYTES BYTES for example).

--
Bartc

 JohnF 08-28-2012 12:23 PM

Re: #define BYTES *8

BartC <bc@freeuk.com> wrote:
>> stuff like length=10 feet; width=3 inches;
>> height=4 meters; etc;

>
> I use the same thing elsewhere: x = 3m+50cm
> so x ends up as 3500 when the basic unit is mm
> (m scales by 1000, and cm by 10).
> So it sounds reasonable enough and using #define as you have
> sounds like the closest you might get in C source code.

Thanks, Bart. I figured it would work because I tested it
and it worked. But I still wanted to double-check because
it looked strange, and I'd never used that kind of #define
construction before, nor seen similar stuff used.
In fact, full disclosure, it would never have occurred
to me except I saw /mm {2.834646 mul} def in some
PostScript code (who knew there were 2.83 seventy-seconds
of an inch in a mm?).

> However:
> o I only use such a suffix after a constant
> o It only applies to the immediately preceding constant
> So your 2+2 BYTES example elsewhere would need to be written
> as 2 BYTES + 2 BYTES, which makes more sense anyway.
>
>> The only question I'm asking
>> is about any straightforward semantic problems arising from
>> the funky-looking syntactic construction.

>
> Without proper language support, your macro could be used
> in ways that were not intended (2 BYTES BYTES BYTES for example).

--
John Forkosh ( mailto: j@f.com where j=john and f=forkosh )

 James Kuyper 08-28-2012 01:54 PM

Re: #define BYTES *8

On 08/28/2012 03:44 AM, JohnF wrote:
> Anything wrong with that, i.e., with #define BYTES *8
> to multiply by 8? It looked a little weird to me, but
> the more obvious #define BYTES(x) ((x)*8) isn't what
> I wanted to write. This was to express units of measurement,
> e.g., int bits = so_many BYTES; rather than
> int bits=BYTES(so_many); . I just wanted to read and write
> it the first way, and my test program
> [test program snipped]
> works fine (and compiles with no -pedantic warnings).
> But that #define BYTES *8 still looks a little funky to me.
> I realize 2+2 BYTES must be written (2+2)BYTES, but is there
> any other kind of "gotcha"?

It's a bad idea because it works in a way very different from C
identifiers that are not macros. The best uses of macros always mimic
other kinds of identifiers syntactically. However, there's another
issue as well: even if we accept the way you want to use this macro,
it's defined wrong. For the sake of portability, it should be

#include <limits.h>
#define BYTES *CHAR_BIT
--
James Kuyper

 tom st denis 08-28-2012 02:31 PM

Re: #define BYTES *8

On Aug 28, 3:44 am, JohnF <j...@please.see.sig.for.email.com> wrote:
> Anything wrong with that, i.e., with #define BYTES *8
> to multiply by 8? It looked a little weird to me, but
> the more obvious #define BYTES(x) ((x)*8) isn't what

As others pointed out, the name is a bit confusing: the macro is
named "bytes" but what it yields is a count of bits.

But more so it's probably better to have something like

#define OBJ_TO_BITS(x) (sizeof(x) * CHAR_BIT)

Which is both named correctly, uses the correct bit definition, and
doesn't have unintended side effects.

Tom

 BartC 08-28-2012 04:44 PM

Re: #define BYTES *8

"Scott Fluhrer" <sfluhrer@ix.netcom.com> wrote in message
news:1346170970.561767@rcdn-nntpcache-2...
>
> "James Kuyper" <jameskuyper@verizon.net> wrote in message
> news:k1iiib$jqi$1@dont-email.me...
>> On 08/28/2012 03:44 AM, JohnF wrote:
>>> #define BYTES *8

>>
>> However, there's another issue
>> as well: even if we accept the way you want to use this macro, it's
>> defined wrong. For the sake of portability, it should be
>>
>> #include <limits.h>
>> #define BYTES *CHAR_BIT

>
> Whether this last part is good advice depends on what the original
> programmer is trying to do.

Apparently he wants to apply a scale factor to a quantity, by using the
macro as a suffix to the quantity.

--
bartc

 ImpalerCore 08-28-2012 05:17 PM

Re: #define BYTES *8

On Aug 28, 10:31 am, tom st denis <t...@iahu.ca> wrote:
> On Aug 28, 3:44 am, JohnF <j...@please.see.sig.for.email.com> wrote:
>
> > Anything wrong with that, i.e., with #define BYTES *8
> > to multiply by 8? It looked a little weird to me, but
> > the more obvious #define BYTES(x) ((x)*8) isn't what

>
> As others pointed out the name is a bit confusing as "bytes" is not
> what it counts but bits.
>
> But more so it's probably better to have something like
>
> #define OBJ_TO_BITS(x) (sizeof(x) * CHAR_BIT)
>
> Which is both named correctly, uses the correct bit definition, and
> doesn't have unintended side effects.

I prefer 'c_sizeof_bits' (the 'c_' is the prefix I use for my stuff),
which helps remind me that the macro is an extension of the sizeof
operator. I use 'c_sizeof_array' for similar reasons. I was tempted
to use all caps, but I wanted to adopt the same case-style as 'sizeof'
for these two macros.

Best regards,
John D.
