Using more than 2G of memory with PERL

 
 
Gary Harvey
02-07-2005
I have a data intensive program that requires all data to be present in
memory. I keep running out of memory at about 2G whenever I run my program.
I tried using a 64 bit version of Perl and hit the same limit even though
the memory on the machine was 8G. Even with 32 bit addresses, I should be
able to use 4G if it is available on the machine. How can I get my perl
programs to access more than 2G of memory? Thanks.
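
For what it's worth, a quick way to confirm that the perl binary actually in use was built with 64-bit pointers and integers is the standard Config module (the same information perl -V reports). Just a sketch, not output from my machine:

use Config;

# ptrsize is 8 on a genuinely 64-bit perl and 4 on a 32-bit one;
# use64bitint / use64bitall show how the binary was configured.
print "ptrsize:     $Config{ptrsize}\n";
print "use64bitint: ", (defined $Config{use64bitint} ? $Config{use64bitint} : "undef"), "\n";
print "use64bitall: ", (defined $Config{use64bitall} ? $Config{use64bitall} : "undef"), "\n";

From the command line, perl -V:ptrsize gives the same answer in one step.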





 
 
 
 
 
Martin Gregory
02-07-2005
Gary Harvey wrote:
> I have a data intensive program that requires all data to be present in
> memory. I keep running out of memory at about 2G whenever I run my program.
> I tried using a 64 bit version of Perl and hit the same limit even though
> the memory on the machine was 8G. Even with 32 bit addresses, I should be
> able to use 4G if it is available on the machine. How can I get my perl
> programs to access more than 2G of memory? Thanks.


Can you post a snippet of code to show the circumstances
where you run out of memory - how you know that this is
what the problem is, and what you did to cause it...
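
If it is a hard per-process ceiling rather than something in your data handling, a throwaway loop like the sketch below (nothing to do with your tcov code, just repeated large allocations) will show where a given perl build tops out; it dies with "Out of memory!" at the limit:

my @blocks;
for my $n (1 .. 1024) {
    # a fresh 64 MB string each iteration, kept alive in @blocks
    push @blocks, 'x' x (64 * 1024 * 1024);
    printf("allocated roughly %d MB\n", $n * 64);
}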
 
 
 
 
 
Gary Harvey
02-10-2005

"Martin Gregory" <(E-Mail Removed)> wrote in message
news:cu8ush$if9$(E-Mail Removed)...
> Gary Harvey wrote:
> > I have a data intensive program that requires all data to be present in
> > memory. I keep running out of memory at about 2G whenever I run my

program.
> > I tried using a 64 bit version of Perl and hit the same limit even

though
> > the memory on the machine was 8G. Even with 32 bit addresses, I should

be
> > able to use 4G if it is available on the machine. How can I get my perl
> > programs to access more than 2G of memory? Thanks.

>
> Can you post a snippet of code to show the circumstances
> where you run out of memory - how you know that this is
> what the problem is, and what you did to cause it...



I am getting an out of memory message. I have watched the process with top
and have seen it approach 2G before running out of memory. This is a rather
long code snippet, but where I run out of memory varies depending on the data.

---

foreach $testcase (@Test_Cases)
{
    $testcase_dir = $base_dir . $forward_slash . $testcase;
    if ( open(TCOVD, $testcase_dir . $forward_slash . "rd" . $forward_slash . "tcovd") )
    {
        if ($PrintProgress eq "TRUE")
        {
            print PROGRESS $testcase . "\n";
            print $testcase . "\n";
        }
        @tcovd_lines = <TCOVD>;
        chomp(@tcovd_lines);
        close(TCOVD);
        # The logic below is designed to do the following:
        # If the function specified is "EvaluateUsage" then we need to
        # allow for zero hit count blocks. Because of limited memory,
        # we cannot keep the zero blocks for each test case. This is
        # OK since every tcovd file should list all instrumented files and
        # their blocks whether we hit them in a test case or not; so, we will
        # allow for zero hit count blocks for only the first test case and only
        # if "EvaluateUsage" is the specified function.
        $RegExp = "";
        if ($EvaluateUnreferencedBlocks eq "TRUE" && $IsFirstTestcase eq "TRUE")
        {
            $IsFirstTestcase = "FALSE";
            $RegExp = $Initial_BlockLevel_RegExp;
        }
        else
        {
            $RegExp = $Subsequent_BlockLevel_RegExp;
        }
        for ($i = 0; $i < @tcovd_lines; )
        {
            # If we have encountered the beginning of source block data
            # and the source block data is a ".f", ".cxx", or ".c" file,
            # then there might be block data that needs to be stored in
            # the global hash(es).
            if ( $tcovd_lines[$i] =~ /SRCFILE/ && ($tcovd_lines[$i] =~ /\.cxx/
                 || $tcovd_lines[$i] =~ /\.c/ || $tcovd_lines[$i] =~ /\.f/) )
            {
                # This confusing bit of code extracts the name of the source library
                # and the name of the file from the SRCFILE line
                @temp_array_a = split(/\s+/, $tcovd_lines[$i]);
                $fullpath_sourcename = $temp_array_a[1];
                @temp_array_b = split(/\//, $fullpath_sourcename);
                $temp_size = @temp_array_b;
                $library = $temp_array_b[($temp_size - 2)];
                $source  = $temp_array_b[($temp_size - 1)];

                if ($DEBUG eq "TRUE")
                {
                    if (exists $Libraries{$library})
                    {
                        printf("%s\t%s\n", $library, $source);
                    }
                }

                $i++;
                # Now that we have a library and file name, the next lines
                # should contain block data. We only want file and block
                # data for the libraries that we are interested in
                if ($AccumulateAllLibraries eq "TRUE" || exists $Libraries{$library})
                {
                    $this_file_has_blocks_hit = "FALSE";
                    $this_file_has_blocks = "FALSE";
                    %file_hash = ();
                    while ($tcovd_lines[$i] =~ /^\t\t[0-9]+\t[0-9]+/)
                    {
                        if ( $tcovd_lines[$i] =~ $RegExp )
                        {
                            $this_file_has_blocks_hit = "TRUE";
                            $this_file_has_blocks = "TRUE";
                            @temp_array_d = split(/\s+/, $tcovd_lines[$i]);
                            # OK, now we have to account for the same block being
                            # listed multiple times under the same source listing!!!
                            if ( exists $file_hash{$temp_array_d[1]} )
                            {
                                # Adds to the existing count
                                $file_hash{$temp_array_d[1]} += $temp_array_d[2];
                            }
                            else
                            {
                                # Adds new entry to hash with count
                                $file_hash{$temp_array_d[1]} = $temp_array_d[2];
                            }
                        }
                        $i++;
                    }
                    # At this point, if $this_file_has_blocks_hit is "TRUE",
                    # then we need to add the %file_hash to the %source_hash.
                    # I'm leaving the ".tcov" off of the filename to save a few
                    # bytes since it doesn't really add any value anyway
                    %temp_source_hash = ();
                    $block = "";
                    $count = "";
                    # Changed this test for the EvaluateUsage functionality.
                    if ( $this_file_has_blocks_hit eq "TRUE" ||
                         ($EvaluateUnreferencedBlocks eq "TRUE" && $this_file_has_blocks eq "TRUE") )
                    {
                        $filename = $library . $forward_slash . $source;
                        # If this $filename already exists in %source_hash,
                        # then $source_hash{$filename} already has a %file_hash
                        # from a preceding entry of the file. We don't want to
                        # overwrite this, so we must append to it.
                        if ( exists $source_hash{$filename} )
                        {
                            %temp_file_hash = ();
                            $block = "";
                            $count = "";
                            # We already know that this $filename already has a hash
                            # with some blocks in it. Now, for each of the new blocks
                            # that we have collected in %file_hash, find out which ones
                            # already exist in $source_hash{$filename}
                            foreach $block ( keys %file_hash )
                            {
                                # For existing blocks, add the new hit number to the existing one
                                if ( exists $source_hash{$filename}{$block} )
                                {
                                    $source_hash{$filename}{$block} += $file_hash{$block};
                                }
                                # Other blocks get put into %temp_file_hash to be appended
                                # to $source_hash{$filename}
                                else
                                {
                                    $temp_file_hash{$block} = $file_hash{$block};
                                }
                            }
                            $source_hash{$filename} = { %temp_file_hash, %{$source_hash{$filename}} };
                        }
                        else
                        {
                            $source_hash{$filename} = { %file_hash };
                        }
                    }
                } #end if (exists $Libraries{$library})

                # Else, this is in a library that we don't care about, so just skip past the data
                else
                {
                    while ($tcovd_lines[$i] =~ /^\t\t[0-9]+\t[0-9]+/)
                    { $i++; }
                }

            } #end if ( $tcovd_lines[$i]=~/SRCFILE/...

            else
            {
                if ( $DEBUG eq "TRUE" )
                {
                    printf("ELSE: %s\n", $tcovd_lines[$i]);
                }
                $i++;
            }

        } #end for
        $testcase_hash{$testcase} = { %source_hash };
        %source_hash = ();
    } #end if (open(TCOVD...
    else
    {
        printf("No tcovd data for %s\n", $testcase);
    }

} #end foreach $testcase
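
The nested structures above (%file_hash rolled into %source_hash, then into %testcase_hash) carry a lot of per-key overhead in Perl, so the in-memory footprint can easily be several times the size of the raw tcovd files. One way to get a rough number is the CPAN module Devel::Size, assuming it is installed; just a sketch, dropped in after the foreach loop:

use Devel::Size qw(total_size);

# Report roughly how many bytes the accumulated structure occupies.
# Walking a large nested hash is itself slow and uses extra memory,
# so this is only meant for spot checks.
printf("testcase_hash: %d bytes\n", total_size(\%testcase_hash));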



 
 
 
 
