Velocity Reviews > Newsgroups > Programming > C++

binary format of the number.
LR
12-02-2008
blargg wrote:

> Excuse my preference for the preprocessor


I think this might be a good place for the preprocessor, although the
usual caveats probably apply. I approached this from a slightly
different perspective.

I wonder if something like this might satisfy the OP:

#include <iostream>
#include <limits>
#include <bitset>
#include <string>

static const unsigned int DigitsInAnUnsignedLong =
    std::numeric_limits<unsigned long>::digits;
typedef std::bitset<DigitsInAnUnsignedLong> BinaryType;

// this uses an explicit ctor
#define Binary(Z) BinaryType(std::string(#Z)).to_ulong()

int main() {
    std::cout << Binary(11) << std::endl;
}

I don't particularly think that 'Binary' is a good name for a define,
but that's something to be careful about anyway, and this is just an
example, more a point of departure than a solution.

There might be some portability issues, and issues if you need signed
types, but maybe that can be taken care of with some abstraction.

LR
 
Kai-Uwe Bux
12-02-2008
LR wrote:

> blargg wrote:
>
>> Excuse my preference for the preprocessor

>
> I think this might be a good place for the preprocessor, although the
> usual caveats probably apply. I approached this from a slightly
> different perspective.
>
> I wonder if something like this might satisfy the OP:
>
> #include <iostream>
> #include <limits>
> #include <bitset>
> #include <string>
>
> static const unsigned int DigitsInAnUnsignedLong =
>     std::numeric_limits<unsigned long>::digits;
> typedef std::bitset<DigitsInAnUnsignedLong> BinaryType;
>
> // this uses an explicit ctor
> #define Binary(Z) BinaryType(std::string(#Z)).to_ulong()
>
> int main() {
>     std::cout << Binary(11) << std::endl;
> }
>
> I don't particularly think that 'Binary' is a good name for a define,
> but that's something to be careful about anyway, and this is just an
> example, more a point of departure than a solution.
>
> There might be some portability issues, and issues if you need signed
> types, but maybe that can be taken care of with some abstraction.


There also might be the issue that with this solution, the binary constants
cannot be used at compile-time, i.e.,

template < unsigned long n >
struct compile_time {};

int main() {
    compile_time< Binary(11) > x;
}

would not compile. The snipped solution, however, deals with that.


Best

Kai-Uwe Bux
 
Kai-Uwe Bux
12-02-2008
blargg wrote:

> Kai-Uwe Bux wrote:
>> LR wrote:
>> > blargg wrote:
>> >> Excuse my preference for the preprocessor
>> >
>> > I think this might be a good place for the preprocessor, although the
>> > usual caveats probably apply. I approached this from a slightly
>> > different perspective.
>> >
>> > I wonder if something like this might satisfy the OP:

> [...]
>> > // this uses an explicit ctor
>> > #define Binary(Z) BinaryType(std::string(#Z)).to_ulong()

> [...]
>> There also might be the issue that with this solution, the binary
>> constants cannot be used at compile-time, i.e.,
>>
>> template < unsigned long n >
>> struct compile_time {};
>>
>> int main() {
>> compile_time< Binary(11) > x;
>> }
>>
>> would not compile. The snipped solution, however, deals with that.

>
> And optimizes out entirely. That is,
> BIN32(10101010,11001001,00110101,01011001) used in an expression should
> generate the EXACT same code as 0xAAC93559 used in its place. A template
> meta-programmed approach could do the same, but I cringe at its
> complexity.


Well, the proposed solution has some remarkable trickiness, too. The
following is not as cute, but in my eyes easier to understand:

#define BIN_(n) ( ( n       & 1 )   | \
                  ( n >> 2  & 2 )   | \
                  ( n >> 4  & 4 )   | \
                  ( n >> 6  & 8 )   | \
                  ( n >> 8  & 16 )  | \
                  ( n >> 10 & 32 )  | \
                  ( n >> 12 & 64 )  | \
                  ( n >> 14 & 128 ) | \
                  ( n >> 16 & 256 ) )
#define BIN8(a) ( BIN_(0##a) * BIN8_VALID_(a) )

(the rest is as upthread.)


Best

Kai-Uwe Bux
 
Tarmo Kuuse
12-02-2008
blargg wrote:
> So you want it to support grouped binary values? What would be the format,
> something like this?
>
> 0b00000011_11000000_00000000_00000000


That's a good solution. The underscore is widely used for grouping in
specs and documents. I think most programmers would recognize this
presentation of binary immediately.

Dreaming is nice

>> Yes, slowly the brain rewires itself to natively process the hexadecimal
>> system. Until it does, however, bugs run rampant.

>
> Whenever I deal with hardware or other systems using bitmasks, I define
> the mask constants once, then use bitwise operators to combine them. And
> when defining them, I'd use 1 shifted left by the bit number, not a hex
> constant.


Not everybody does this. Each time I modify headers that define bits in
hex, it feels like a grain of sand is in my shoe.

> Excuse my preference for the preprocessor (partly so it works in C and
> C++). I convert the literals to octal in the macros, and also verify that
> they contain 0 to 8 bits each, and nothing besides 0 or 1. You don't have
> to pad constants to 8 bits, so for example BIN16(10,1) == 0x201.
>
> #define BIN8_VALID_(a) (sizeof (char [(01##a & 0111111111) == 01##a]))
> #define BIN8_(n) ((n >> 14 & 0xE0) | (n >> 8 & 0x18) | (n >> 4 & 0x07))
> #define BIN8(a) (BIN8_( ((16 + 4 + 1) * (0##a)) ) * BIN8_VALID_( a ))
> #define BIN16(a,b) (BIN8(a)*0x100u + BIN8(b))
> #define BIN32(a,b,c,d) (BIN16(a,b)*0x10000u + BIN16(c,d))
>
> [snip]


Very nifty. The preprocessor is fine - C is unavoidable in this area.

So, gcc is able to evaluate this compile time?

--
Kind regards,
Tarmo
 
Kai-Uwe Bux
12-02-2008
Tarmo Kuuse wrote:

> blargg wrote:

[snip]
>> #define BIN8_VALID_(a) (sizeof (char [(01##a & 0111111111) == 01##a]))
>> #define BIN8_(n) ((n >> 14 & 0xE0) | (n >> 8 & 0x18) | (n >> 4 & 0x07))
>> #define BIN8(a) (BIN8_( ((16 + 4 + 1) * (0##a)) ) * BIN8_VALID_( a ))
>> #define BIN16(a,b) (BIN8(a)*0x100u + BIN8(b))
>> #define BIN32(a,b,c,d) (BIN16(a,b)*0x10000u + BIN16(c,d))
>>
>> [snip]

>
> Very nifty. Preprocessor is fine - C is unavoidable in this area.
>
> So, gcc is able to evaluate this compile time?


Any compliant compiler should. The tricky bits are

(a) that the result is correct.
(b) that no intermediate value overflows the range of unsigned constant
expressions (unsigned long is guaranteed to reach at least 2^32 - 1).


Best

Kai-Uwe Bux
 
Gennaro Prota
12-02-2008
Tarmo Kuuse wrote:
> blargg wrote:
>> So you want it to support grouped binary values? What would be the
>> format,
>> something like this?
>>
>> 0b00000011_11000000_00000000_00000000

>
> That's a good solution. The underscore is widely used for grouping in
> specs and documents. I think most programmers would recognize this
> presentation of binary immediately.
>
> Dreaming is nice


Excuse me guys, do my posts appear on your news server? I'm
asking because I mentioned n0259 and its fate, about three days
ago. Am I properly plugged into the Usenet thing?

--
Gennaro Prota | name.surname yahoo.com
Breeze C++ (preview): <https://sourceforge.net/projects/breeze/>
Do you need expertise in C++? I'm available.
 
Tarmo Kuuse
12-02-2008
Gennaro Prota wrote:
> Excuse me guys, do my posts appear on your news server? I'm
> asking because I mentioned n0259 and its fate, about three days
> ago. Am I properly plugged into the Usenet thing?


Yes, your post is visible. There are some follow-up messages from James
Kanze and others.

--
Kind regards,
Tarmo

 
Kai-Uwe Bux
12-02-2008
Hendrik Schober wrote:

> blargg wrote:
>> [...]
>> And optimizes out entirely. That is,
>> BIN32(10101010,11001001,00110101,01011001) used in an expression should
>> generate the EXACT same code as 0xAAC93559 used in its place. A template
>> meta-programmed approach could do the same, but I cringe at its
>> complexity.

>
> Mhmm. I just toyed with the idea a bit and it doesn't seem that
> hard. Here's something to start with:
>
> template< unsigned long BinNum >
> struct const_bin {
>     static const unsigned long result =
>         const_bin<BinNum/10>::result * 2 + BinNum % 10;
> };
>
> template<>
> struct const_bin<0> {
>     static const unsigned long result = 0;
> };
>
> This lacks a check for nonsensical input ('const_bin<3>::result'
> compiles just fine) and support for multiple template arguments.
> But unless I'm missing something (I usually do) both should be
> rather easy to add.


Did you try

const_bin< 00010001 >::result


Best

Kai-Uwe Bux
 