Velocity Reviews - Computer Hardware Reviews

Velocity Reviews > Newsgroups > Programming > Ruby > segmentation fault on long running script (linux)


segmentation fault on long running script (linux)

 
 
Vladimir Konrad
 
      02-26-2007

i have a long-running script which takes a lot of NetCDF files and
generates SQL which gets piped to another process for database import.
the script failed about halfway through (after about 5 days) with a
segmentation fault.

is there a way to debug a segfaulting script (i.e. is there a way to
generate a core file) so i can find out more?

my understanding is that a segmentation fault is usually caused by
addressing memory which does not belong to the process (a bad pointer
and such).
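(a quick sanity check, illustrative and not from the original posts: before the next five-day run, you can confirm whether your environment produces core files at all by deliberately segfaulting a throwaway process.)

```shell
ulimit -c unlimited          # lift the core-size limit for this shell
sleep 60 &                   # any disposable process will do
kill -SEGV $!                # deliver a segmentation-fault signal to it
wait $!                      # reap it; status 139 = 128 + SIGSEGV (11)
echo "exit status: $?"
ls core* 2>/dev/null         # a core file appears here only if the limit
                             # and the kernel's core_pattern allow it
```

if no core file shows up even for this, the long script won't leave one either.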

--- using
ruby 1.8.2 (2005-04-11) [i386-linux]
---
vlad

--
Posted via http://www.ruby-forum.com/.

 
 
 
 
 
Jan Svitok
 
      02-26-2007
On 2/26/07, Vladimir Konrad <(E-Mail Removed)> wrote:
>
> is there a way to debug segfaulting script (i.e. is there a way to
> generate core file) so i can find out more?

I suppose that Linux makes core dumps unless it is told not to
(using ulimit). You can inspect the core file with gdb. It may be
helpful to have the symbol table available (i.e. a non-stripped ruby
binary).

Now, don't take this as too accurate... The last time I debugged a core
file was in 1999...
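Roughly, the workflow would look like this (the script name below is made up, and gdb has to be pointed at the same ruby binary that crashed):

```shell
# enable core dumps in the shell that launches the script
ulimit -c unlimited
ulimit -c                        # should now print "unlimited"

# re-run the long job; on the next segfault the kernel writes a core file
# (hypothetical command):
#   ruby generate_sql.rb | psql mydb

# then inspect the dump against the interpreter binary:
#   gdb "$(which ruby)" core
#   (gdb) bt                     # C-level backtrace at the point of the crash
```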

 
 
 
 
 
Brian Candler
 
      02-26-2007
On Tue, Feb 27, 2007 at 12:21:08AM +0900, Jan Svitok wrote:
> I suppose that linux makes core dumps unless it is told not to do
> (using ulimit).


Or the process is setuid, unless you enable dumping of setuid processes with
sysctl. By default:

kernel.suid_dumpable = 0
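For reference, that sysctl can be queried and changed like so (changing it needs root, and allowing setuid processes to dump core is a security trade-off):

```shell
# read the current value (0 on kernels of that era means "no setuid dumps")
sysctl kernel.suid_dumpable

# enable at runtime:
#   sysctl -w kernel.suid_dumpable=1
# or persistently, by adding this line to /etc/sysctl.conf:
#   kernel.suid_dumpable = 1
```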

Note that many Linux systems have the core size ulimit set to 0 by default.
I get

$ ulimit -a | grep core
core file size (blocks, -c) 0

on both Ubuntu 6.06 and CentOS 4.4.

Regards,

Brian.

 
 
Vladimir Konrad
 
      02-26-2007
> $ ulimit -a | grep core
> core file size (blocks, -c) 0


this setting is the "culprit", i think (this is on debian sarge).

thank you all very much for the pointers.

vlad
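(an editorial note for anyone landing here later: `ulimit -c unlimited` only applies to the shell it is run in, so the limit has to be raised wherever the long job is actually launched. a sketch, with an example user name in limits.conf:)

```shell
# one-off: raise the limit in the shell that starts the job
ulimit -c unlimited
ulimit -c        # prints "unlimited" if the hard limit allows it

# persistent, via PAM limits in /etc/security/limits.conf (example user):
#   vlad    soft    core    unlimited

# or put "ulimit -c unlimited" in the wrapper script that launches the job
```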


 