Large number libraries/algorithms

James Kuyper
      10-24-2012
On 10/24/2012 08:23 AM, David Brown wrote:
> On 24/10/2012 13:18, James Kuyper wrote:
>> On 10/24/2012 03:01 AM, David Brown wrote:
>> ...
>>> I noticed that (on gcc 4.5 on Linux-64). There is also no int128_t or
>>> related types in <stdint.h>. And there is no way to express an int128_t
>>> literal for targets whose "long long" is smaller than 128 bits. On
>>> x86-64, "long long" is 64-bit (with "long" being 32-bit on Windows,
>>> 64-bit on Linux), so until "long long long int" or "really long int" is
>>> added to C, the language is missing a bit of support for 128-bit integers.

>>
>> There's no need for that - that's what int128_t, int_least128_t and
>> int_fast128_t are reserved for. The language isn't missing anything,
>> it's just that particular implementation which has failed to implement
>> something which is already part of the language.
>>

>
> Yes, these type names are suitable for that purpose (though I am not
> sure they are reserved for it by C99)


If <stdint.h> is #included, C99 defines what those names mean for all N,
while only making the least and fast types mandatory for N = 8, 16, 32,
and 64. Conversely, the standard prohibits <stdint.h> from defining
those names for values of N for which the corresponding types are
unsupported.
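
For example, a program can feature-test the optional exact-width type
and fall back on the mandatory least-width one. A minimal sketch:

#include <stdint.h>

/* INT64_MAX is #defined if and only if int64_t exists. */
#ifdef INT64_MAX
typedef int64_t counter_t;        /* exact-width type is available */
#else
typedef int_least64_t counter_t;  /* mandatory in C99, may be wider */
#endif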

Since those typedefs and macros do not have external linkage, but do
have file scope, the corresponding identifiers are reserved only if
<stdint.h> is #included; otherwise, they're in the name space reserved
for users, which means that an implementation is NOT free to give them
any other meaning.

> - and it is the implementation
> here that has failed to include them in <stdint.h> (I am not sure
> whether this is the responsibility of the compiler or the library).


The library can't do anything to make it work unless the compiler
supports a type of the right size; but the corresponding typedef must
not be defined unless and until the standard header is #included, so it
seems to me that they must both work together. Also, support for 128-bit
integers will probably affect [u]intmax_t, and therefore also the
corresponding functions from <inttypes.h>, and the printf()/scanf() family.
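
For instance, if intmax_t grew to 128 bits, the following (standard C99,
nothing hypothetical in it) would automatically print the full 128-bit
value:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    intmax_t big = INTMAX_MAX;  /* whatever the widest type can hold */
    printf("INTMAX_MAX = %" PRIdMAX "\n", big);
    printf("again: %jd\n", big);  /* 'j' length modifier means intmax_t */
    return 0;
}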

> However, it is possible to express a 32-bit zero as "0", and a 64-bit
> ("long long") zero as "0LL". But there is no way to write a literal
> 128-bit zero, without extending C to allow "0LLL".


That's what the INTN_C macros are for. They're clumsy, but they do the job:

#include <stdint.h>
#ifndef INT128_MAX
#error 128 bit types not supported
#endif

int128_t i128 = INT128_C(0);

Unless int128_t is a typedef for a standard type, INT128_C() will have
to do something implementation-specific to mark it as an int128_t value.
Keeping in mind that the result must be suitable for use in #if
preprocessing directives, it can't be something like ((int128_t)0),
because, for the purposes of such directives, that would parse as
((0)0), which is a syntax error. It might very well, as an
implementation-specific extension, expand to 0LLL. However, your code
doesn't have to know about that; all it needs to know is whether or not
INT128_MAX is #defined. If so, it can use INT128_C().
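
Here's how that might look in use. INT128_MAX and INT128_C() are the
names C99 reserves for the purpose, but I don't know of any
implementation that defines them yet, so treat this as hypothetical:

#include <stdint.h>

#ifdef INT128_MAX
#if INT128_C(1) << 100 > 0
/* #if arithmetic is done in [u]intmax_t, which must be at least 128
   bits wide on any implementation that gets this far. */
int128_t high_bit = INT128_C(1) << 100;  /* no literal syntax needed */
#endif
#endif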

> The "int128_t" and related types are enough to do most 128-bit integer
> work in C. But there are unfortunately a number of places where the C
> language and library specifications are tied to the poorly-defined
> "int", "short", "long", "long long" types rather than size-specific
> types. This includes the suffixes on literals, and the format
> specifiers in printf() (the "PRId32" style macros help enormously, but
> they are not exactly elegant - and since they expand to specifiers for
> short, int, long or long long, they can't support 128-bit integers).


There's no requirement that PRId128 expand into a specifier for short,
int, long, or long long. There is, on the other hand, a requirement that
if PRId128 is #defined, it must expand into a format specifier "suitable
for use ... when converting" int128_t. That specifier might be
implementation-specific, unless int128_t happens to be a typedef
for a standard type, but it must exist.
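
The pattern is already visible with the mandatory widths; a conforming
PRId128 would slot in the same way. The #ifdef branch below is
hypothetical, as above:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    int64_t a = INT64_C(1) << 62;
    printf("a = %" PRId64 "\n", a);  /* standard, always available */
#ifdef PRId128
    int128_t b = INT128_C(1) << 100;
    printf("b = %" PRId128 "\n", b);  /* hypothetical 128-bit analogue */
#endif
    return 0;
}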

> On the other hand, I don't see how this would limit practical 128-bit
> usage much. After all, how often do you need to write 128-bit literals,
> or printf them out?


I'm sure that if I needed to work with 128-bit data, I would need to
write them and to printf() them out. Luckily, as explained above, that's
not a problem.

 
Noob
      10-24-2012
Noob wrote:

> I was under the impression that there existed an obscure GCC-specific
> syntax to define 128-bit integer types:
>
> typedef unsigned int myUI64 __attribute__((mode(TI)));


Errr, I meant myUI128.

And IIUC, one may also define a 256-bit integer type with

typedef unsigned int myUI256 __attribute__((mode(OI)));

sizeof(myUI256) is supposed to yield 32, which seems to imply(?)
that these type definitions fall apart when CHAR_BIT != 8.
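
A quick sanity check, assuming a gcc target that accepts mode(TI)
at all:

#include <stdio.h>

typedef unsigned int myUI128 __attribute__((mode(TI)));

int main(void)
{
    /* TImode is 16 bytes, i.e. 128 bits when CHAR_BIT == 8. */
    printf("sizeof(myUI128) = %zu\n", sizeof(myUI128));
    return 0;
}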

Regards.

 
Keith Thompson
      10-24-2012
Ben Bacarisse <(E-Mail Removed)> writes:
[...]
> I commented on this a couple of days ago. I imagine the fact that
> __int128 is in the compiler but int128_t is not in stdint.h simply
> reflects the desire to keep the number of versions of stdint.h to a
> minimum. Since C99 requires at least a 64-bit integer type, limiting
> yourself to that length keeps stdint.h very simple. However, I don't
> think gcc has the mechanisms in place to make intmax_t be the 128-bit
> type (see below) and that may be the real reason stdint.h stops at
> 64-bit types.


If gcc provided an implementation-defined macro that indicates
whether __int128 is available, then <stdint.h> could use that macro
to decide whether to define int128_t and friends, and how to define
intmax_t. (<stdint.h> is provided by glibc, or by whatever C library
is used on a given system -- I don't know whether gcc uses fixincludes
to generate a compatible <stdint.h> on non-glibc systems).
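
Something along these lines, perhaps -- though the idea that glibc
would key off this particular macro is my assumption, not actual glibc
code (newer gcc versions do predefine __SIZEOF_INT128__ when __int128
is available):

/* hypothetical <stdint.h> fragment */
#ifdef __SIZEOF_INT128__
typedef __int128 int128_t;
typedef unsigned __int128 uint128_t;
/* INT128_MAX, INT128_C() and friends would still need compiler help,
   since they must also work in #if directives. */
#endif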

Incidentally, gcc's syntax for its 128-bit unsigned type is

unsigned __int128
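
For example (a sketch assuming x86-64; printf() has no conversion
specifier for __int128, so the value is printed as two 64-bit halves):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    unsigned __int128 x = ((unsigned __int128)0x0123456789abcdefULL << 64)
                          | 0xfedcba9876543210ULL;
    printf("x = 0x%016" PRIx64 "%016" PRIx64 "\n",
           (uint64_t)(x >> 64), (uint64_t)x);
    return 0;
}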

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
Will write code for food.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
 
Keith Thompson
      10-24-2012
Noob <root@localhost> writes:
> Noob wrote:
>> I was under the impression that there existed an obscure GCC-specific
>> syntax to define 128-bit integer types:
>>
>> typedef unsigned int myUI64 __attribute__((mode(TI)));

>
> Errr, I meant myUI128.
>
> And IIUC, one may also define a 256-bit integer type with
>
> typedef unsigned int myUI256 __attribute__((mode(OI)));


Yes, that seems to work -- but since neither "mode(TI)" nor "mode(OI)"
appears in the gcc documentation, I wouldn't count on them being
supported. For the latter, I get "error: unable to emulate 'OI'".

> sizeof(myUI256) is supposed to yield 32, which seems to imply(?)
> that these type definitions fall apart when CHAR_BIT != 8.


I don't think gcc supports CHAR_BIT!=8.

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
Will write code for food.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
 
Ben Bacarisse
      10-24-2012
Keith Thompson <(E-Mail Removed)> writes:

> Ben Bacarisse <(E-Mail Removed)> writes:
> [...]
>> I commented on this a couple of days ago. I imagine the fact that
>> __int128 is in the compiler but int128_t is not in stdint.h simply
>> reflects the desire to keep the number of versions of stdint.h to a
>> minimum. Since C99 requires at least a 64-bit integer type, limiting
>> yourself to that length keeps stdint.h very simple. However, I don't
>> think gcc has the mechanisms in place to make intmax_t be the 128-bit
>> type (see below) and that may be the real reason stdint.h stops at
>> 64-bit types.

>
> If gcc provided an implementation-defined macro that indicates
> whether __int128 is available, then <stdint.h> could use that macro
> to decide whether to define int128_t and friends, and how to define
> intmax_t.


Yes, that's the easy part! When I said that gcc does not have the
mechanisms in place, I was referring to the ability to write a literal with
the right type. If stdint.h provides access to this 128-bit type it
must also provide INT128_C and that's probably a gcc issue.
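
Until then, the usual workaround is to assemble 128-bit constants from
two 64-bit halves -- a sketch assuming gcc's __int128 extension, with a
macro name of my own invention:

#include <stdint.h>

#define MAKE_U128(hi, lo) \
    (((unsigned __int128)(uint64_t)(hi) << 64) | (uint64_t)(lo))

/* e.g. unsigned __int128 m = MAKE_U128(UINT64_MAX, UINT64_MAX); */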

<snip>
--
Ben.
 