Fast way to add null after each char

 
 
James Kanze
09-08-2010
On Sep 7, 7:52 pm, "Alf P. Steinbach /Usenet"
<(E-Mail Removed)> wrote:
> * Brad, on 07.09.2010 14:45:


[...]
> First, in Windows, conversion to wstring like


> string s = "blah blah";
> wstring u( s.begin(), s.end() );


> produces exactly the byte sequence that you're laboriously creating.


Only if you compile with the /J option. By default, char is
signed, and accented characters will result in incorrect
Unicode. (You could use boost::transform_iterator to convert
the char to unsigned char.)

This isn't a Windows issue: the standard requires the above to
work, *and* defines the semantics. (Semantics which are, in
general, useless.)
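
If you don't want to pull in Boost just for that, a plain loop does the
same thing. A minimal sketch (the function name is mine, and it assumes
a single-byte source encoding such as Latin-1, where each unsigned char
value is already the code point you want):

    #include <string>

    // Widen via unsigned char so that accented characters (e.g. 0xE9,
    // e-acute in Latin-1) don't sign-extend into bogus wchar_t values.
    std::wstring widen(const std::string& s)
    {
        std::wstring result;
        result.reserve(s.size());
        for (std::string::const_iterator it = s.begin(); it != s.end(); ++it)
            result.push_back(
                static_cast<wchar_t>(static_cast<unsigned char>(*it)));
        return result;
    }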

--
James Kanze
 
 
 
 
 
Stuart Golodetz
09-08-2010
Francesco S. Carta wrote:
> Stuart Golodetz <(E-Mail Removed)>, on 07/09/2010 23:08:37, wrote:
>
>> Francesco S. Carta wrote:
>>> tni <(E-Mail Removed)>, on 06/09/2010 14:33:13, wrote:
>>>
>>>> On 2010-09-06 12:38, Francesco S. Carta wrote:
>>>>> Goran Pusic <(E-Mail Removed)>, on 06/09/2010 03:15:21, wrote:
>>>>>
>>>>>> Guys, aren't you a bit misleading with iterators and big-O and
>>>>>> stuff?
>>>>>
>>>>> My bad. I intentionally posted a wrong suggestion without clearly
>>>>> marking it as such - I thought I was going to be castigated
>>>>> immediately,
>>>>> but since the punishment didn't come at once, I kept it on to see what
>>>>> was going to happen... now I realize that it wasn't all that fun for
>>>>> the
>>>>> others, so I present my apologies to the group for the wasted time.
>>>>
>>>> The sad thing is, I've seen quadratic stuff like that far too often in
>>>> production code. There certainly are more than enough (clueless) people
>>>> who wouldn't consider it a joke.
>>>
>>> Indeed. And since it seems that most of those programmers have no idea
>>> of what the O() notation is, simply saying that an algorithm has
>>> linear or quadratic complexity is not enough to make the point clear.
>>>
>>> Unfortunately it needs more words (and practical examples as the one
>>> you posted in the other branch) and a big fat "quadratic is BAD" sign
>>> three meters on each side, but it's worth the effort.

>>
>> You're oversimplifying ;-)
>>
>> It's not that quadratic algorithms per se are necessarily bad -- for
>> some problems (matrix multiplication, for example), a quadratic solution
>> may in fact be optimal. The point is that you want to avoid implementing
>> solutions that are inadequate for the problem at hand (whatever their
>> actual complexity).

>
> Indeed, I was unconsciously implying the problem (and the context) at
> hand, where anything more than linear complexity would be sub-optimal
> -- thank you for the refinement.


Indeed -- for what it's worth, I only meant to nitpick you in a
friendly way, hence the ;-)

Incidentally, I think it's worth observing that the algorithm with the
optimal big-O complexity is not necessarily the optimal solution to
your problem. For that matter, neither is the algorithm with the
optimal *actual* complexity (i.e. including the constant factors) --
if it's so damn hard to implement that you could have solved the
problem with the worse algorithm by the time you implemented the
better one, then you should implement the worse one and feel no guilt
about it afterwards. YMMV.
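
To tie that back to this thread's actual problem, here's a sketch of
both approaches (function names are mine, and it's untested):

    #include <string>

    // Quadratic: each insert shifts the whole tail of the string,
    // so n insertions cost O(n^2) overall.
    std::string add_nulls_slow(std::string s)
    {
        for (std::string::size_type i = 1; i <= s.size(); i += 2)
            s.insert(i, 1, '\0');
        return s;
    }

    // Linear: reserve once up front, then append as you go.
    std::string add_nulls_fast(const std::string& s)
    {
        std::string result;
        result.reserve(2 * s.size());
        for (std::string::size_type i = 0; i < s.size(); ++i) {
            result.push_back(s[i]);
            result.push_back('\0');
        }
        return result;
    }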

> Of course, an algorithm solving the TSP with quadratic complexity would
> be very very good - no idea if something like that is even conceivable,
> I'm not a mathematician, and for that very reason, I must take your word
> about quadratic being optimal in matrix multiplication !-)


If you came up with a quadratic complexity algorithm for TSP (which is
NP-hard), wouldn't you have just come up with a constructive proof that
P = NP? In which case that would indeed be "very very good" -- I think
you'd probably make yourself quite a bit of money... I guess it's
conceivable, but unlikely in practice. (As an article of personal
faith, I strongly suspect P != NP, but maybe one day we'll know for
sure.)

In terms of matrix multiplication being at least quadratic, consider
that you have to at least read all the elements of both input matrices.
I'm not a mathematician either, btw ;-)
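
(Spelled out: the two n x n inputs contain 2n^2 entries between them,
and an algorithm that never reads some entry can be fooled by changing
that entry, so any correct algorithm must perform at least 2n^2 reads
-- hence Omega(n^2) time.)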

Cheers,
Stu
 
 
 
 
 
Stuart Golodetz
09-14-2010
Joshua Maurice wrote:
> On Sep 8, 3:54 am, Stuart Golodetz <(E-Mail Removed)> wrote:
>> Joshua Maurice wrote:
>>> On Sep 7, 3:08 pm, Stuart Golodetz <(E-Mail Removed)> wrote:
>>>> [...]
>>>> You're oversimplifying ;-)
>>>> It's not that quadratic algorithms per se are necessarily bad -- for
>>>> some problems (matrix multiplication, for example), a quadratic solution
>>>> may in fact be optimal. The point is that you want to avoid implementing
>>>> solutions that are inadequate for the problem at hand (whatever their
>>>> actual complexity).
>>> I assume you meant to say either that
>>> 1- O(n^2) is an optimal bound on the number of cell additions, for
>>> doing a matrix add of two two-dimensional matrices, both of size (n,
>>> n). In this case, you are correct.
>>> 2- O(n^3) is an optimal bound on the number of cell multiplications,
>>> for doing a matrix multiply of two two-dimensional matrices, both of
>>> size (n, n). Even assuming this interpretation, you are incorrect. You
>>> can do better than O(n^3), though it's not the easiest thing to
>>> program:
>>> http://en.wikipedia.org/wiki/Matrix_...rithms_for_eff...

>> What I meant by that was that matrix multiplication of two n x n
>> matrices is Omega(n^2) -- i.e. you can't do better than a solution of
>> quadratic complexity (since the output matrix depends on all the
>> elements of the two input matrices, you have to at least read all of
>> them, which instantly makes it quadratic). Thus a quadratic solution, if
>> one were possible, would be optimal. Actually, I'm not sure that there
>> isn't a tighter lower bound than that for matrix multiplication, in
>> which case a quadratic solution wouldn't actually be possible -- but I
>> was using the example to explain what I meant (I think I failed in that,
>> incidentally!)

>
> Sorry. I was trying to ferret out exactly what you meant in a nice
> way.


No offence taken -- I should have been clearer.

>> What I wasn't saying was anything about the Big-O upper bound for either
>> matrix addition or multiplication. I'm aware that you can beat O(n^3)
>> for matrix multiplication -- e.g. Strassen's algorithm is O(n^2.807).
>> I've never actually sat down and implemented it, though, because I've
>> never personally needed it.

>
> Indeed. My favorite way of looking at this is choice trees. I'm
> fudging the formalism a little bit now, but we know that the output
> matrix depends on every cell in the input matrices, so we know that
> the code has to at least read each input cell. Given the instruction
> set can only read a fixed finite number of cells in an instruction
> (naively only 1 or 2), then we know that O(n^2) instructions are
> required to do the matrix multiplication. The bound of O(n^2) is thus
> a lower bound, though we don't know if any algorithm exists which can
> do it in that bound from this proof of choice trees.


I'm not totally sure what the choice tree stuff is in this context
actually -- could you elaborate?

For what it's worth, btw, I think you mean Omega(n^2) and not O(n^2) --
since as you (correctly) state it's a lower bound:

http://en.wikipedia.org/wiki/Big_O_n...ndau_notations

But that's nitpicking on my part (and I'm far from being an expert on
complexity analysis, so I should perhaps refrain from it!).

> Also, as mentioned in the wiki link, the best we've been able to do
> (for the given instruction set) is about n ^ ((log base 2) of (7)).


Apparently there are asymptotically better algorithms that have been
discovered, e.g.

http://en.wikipedia.org/wiki/Coppers...grad_algorithm

However, this one (at least) isn't used in practice, because you have to
have really large matrices to make it worthwhile, and current hardware
can't handle matrices of the size required.
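
(For anyone wondering where the n^(log2 7) figure quoted above comes
from: Strassen's scheme multiplies two n x n matrices using 7
multiplications of (n/2) x (n/2) submatrices plus a constant number of
additions, so

    T(n) = 7*T(n/2) + O(n^2)

which the master theorem solves to T(n) = Theta(n^(log2 7)), roughly
Theta(n^2.807).)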

Cheers,
Stu
 
 
Joshua Maurice
09-14-2010
On Sep 14, 7:58 am, Stuart Golodetz <(E-Mail Removed)> wrote:
> Joshua Maurice wrote:
> [...]
>
> > Indeed. My favorite way of looking at this is choice trees. I'm
> > fudging the formalism a little bit now, but we know that the output
> > matrix depends on every cell in the input matrices, so we know that
> > the code has to at least read each input cell. Given the instruction
> > set can only read a fixed finite number of cells in an instruction
> > (naively only 1 or 2), then we know that O(n^2) instructions are
> > required to do the matrix multiplication. The bound of O(n^2) is thus
> > a lower bound, though we don't know if any algorithm exists which can
> > do it in that bound from this proof of choice trees.

>
> I'm not totally sure what the choice tree stuff is in this context
> actually -- could you elaborate?


The best example I can think of is for proving lower bounds on sorting
algorithms. Let's take an array of N elements. For arbitrary elements,
we know that any permutation of the input is possibly the correct
output. If we model our program as being able to "choose" a subset of
the remaining possibles at every step, then we end up with our program
being modeled by a binary tree, with time going from the root to the
leaves, and each leaf representing a single possible output
permutation. The middle nodes represent subsets of size bigger than 1,
but less than the whole. Each step of the algorithm removes some of
the possible answers from the answer space until only the correct
answer remains.

It's been a while and I'm slightly fuzzy on the exact formalism, so I
know I'm explaining it badly. I /think/ that you can say, for unknown
input, you can calculate the size of the answer space (size X), and
for "standard instruction sets", each "step" can only take one branch
in the binary decision tree, so the minimum number of steps depends on
the height of a balanced binary tree with X leaves.
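
Concretely, for comparison sorting, the answer space has X = n!
permutations, so the height of the tree -- and hence the number of
comparisons -- is at least

    log2(n!) = Theta(n log n)

which is the classic lower bound for comparison sorts.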

I'm not sure it exactly applied to the reasoning I attempted to use. I
think I used something analogous. I know that the algorithm must at
least read every single input cell, each read takes O(1) cycle, and
there are O(N^2) cells, so the algorithm running time in cycles must
be at least O(N^2).

> For what it's worth, btw, I think you mean Omega(n^2) and not O(n^2) --
> since as you (correctly) state it's a lower bound:
>
> http://en.wikipedia.org/wiki/Big_O_n...Bachmann.E2.80....
>
> But that's nitpicking on my part (and I'm far from being an expert on
> complexity analysis, so I should perhaps refrain from it!).


I was being somewhat informal, and I'm not really quite sure myself. I
know what it means for a function to be Big O and Big Theta, but I'm
not sure of the exact standard commonly used. Let me take a stab at
something a little more formal:

For an instruction set in which you can read at most a fixed finite
number of matrix cells per "cycle", any correct matrix multiplication
implementation for two N by N matrices must have its running time
describable as a function f : input size -> number of cycles, where f
is little-o (N^2).

> > Also, as mentioned in the wiki link, the best we've been able to do
> > (for the given instruction set) is about n ^ ((log base 2) of (7)).

>
> Apparently there are asymptotically better algorithms that have been
> discovered, e.g.
>
> http://en.wikipedia.org/wiki/Coppers...grad_algorithm
>
> However, this one (at least) isn't used in practice, because you have to
> have really large matrices to make it worthwhile, and current hardware
> can't handle matrices of the size required.


Mmm. I was unaware. I was just going off what I read off the matrix
multiply wiki page, and we all know how reliable that is.
 
 
Stuart Golodetz
09-15-2010
Joshua Maurice wrote:
> On Sep 14, 7:58 am, Stuart Golodetz <(E-Mail Removed)> wrote:
>> [...]
>> I'm not totally sure what the choice tree stuff is in this context
>> actually -- could you elaborate?

>
> The best example I can think of is for proving lower bounds on sorting
> algorithms. Let's take an array of N elements. For arbitrary elements,
> we know that any permutation of the input is possibly the correct
> output. If we model our program as being able to "choose" a subset of
> the remaining possibles at every step, then we end up with our program
> being modeled by a binary tree, with time going from the root to the
> leaves, and each leaf representing a single possible output
> permutation. The middle nodes represent subsets of size bigger than 1,
> but less than the whole. Each step of the algorithm removes some of
> the possible answers from the answer space until only the correct
> answer remains.
>
> It's been a while and I'm slightly fuzzy on the exact formalism, so I
> know I'm explaining it badly. I /think/ that you can say, for unknown
> input, you can calculate the size of the answer space (size X), and
> for "standard instruction sets", each "step" can only take one branch
> in the binary decision tree, so the minimum number of steps depends on
> the height of a balanced binary tree with X leaves.


Thanks, I think I sort of see what you're getting at -- I'd need to
look at it in more detail to properly understand it, though (perhaps
when it's not 1am!). I should add that the explanation itself isn't
the problem so much as the ability of my tired brain to understand it!

> I'm not sure it exactly applied to the reasoning I attempted to use. I
> think I used something analogous. I know that the algorithm must at
> least read every single input cell, each read takes O(1) cycle, and
> there are O(N^2) cells, so the algorithm running time in cycles must
> be at least O(N^2).
>
>> For what it's worth, btw, I think you mean Omega(n^2) and not O(n^2) --
>> since as you (correctly) state it's a lower bound:
>>
>> http://en.wikipedia.org/wiki/Big_O_n...Bachmann.E2.80....
>>
>> But that's nitpicking on my part (and I'm far from being an expert on
>> complexity analysis, so I should perhaps refrain from it!).

>
> I was being somewhat informal, and I'm not really quite sure myself. I
> know what it means for a function to be Big O and Big Theta, but I'm
> not sure of the exact standard commonly used. Let me take a stab at
> something a little more formal:
>
> For an instruction set in which you can read at most a fixed finite
> number of matrix cells per "cycle", any correct matrix multiplication
> implementation for two N by N matrices must have its running time
> describable as a function f : input size -> number of cycles, where f
> is little-o (N^2).


To the best of my knowledge, little-o (as I've seen it used) is still an
upper bound -- specifically, it's a strict upper bound, whereas Big-O is
a weak one. For instance, f is O(n^2) means that f is no worse than
quadratic, whereas f is o(n^2) means that f is better than quadratic.
For lower bounds, Omega(n^2) means that f is no better than quadratic,
whereas omega(n^2) means that f is worse than quadratic. In this case,
it's clear that matrix multiplication is Omega(n^2), i.e. no better than
quadratic, but it's not clear at first glance that it's omega(n^2) --
although it may well be.
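
(For reference, the way I understand the usual definitions:

    f(n) = O(g(n)):     f(n) <= c*g(n) for some c > 0 and all large n
    f(n) = o(g(n)):     f(n) <= c*g(n) for every c > 0 and all large n
    f(n) = Omega(g(n)): f(n) >= c*g(n) for some c > 0 and all large n
    f(n) = omega(g(n)): f(n) >= c*g(n) for every c > 0 and all large n

i.e. little-o is "strictly below" and little-omega is "strictly above".)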

>>> Also, as mentioned in the wiki link, the best we've been able to do
>>> (for the given instruction set) is about n ^ ((log base 2) of (7)).

>> Apparently there are asymptotically better algorithms that have been
>> discovered, e.g.
>>
>> http://en.wikipedia.org/wiki/Coppers...grad_algorithm
>>
>> However, this one (at least) isn't used in practice, because you have to
>> have really large matrices to make it worthwhile, and current hardware
>> can't handle matrices of the size required.

>
> Mmm. I was unaware. I was just going off what I read off the matrix
> multiply wiki page, and we all know how reliable that is.


Likewise! Just a different wiki page to the one you were reading -- I
guess there should have been a link between the two which someone
forgot to add. Just thought I'd give you the link in case you were
interested(!)

Best,
Stu
 